Separate service curves for the LS and RT criteria can lead to
certain traps that come from "fighting" between ideal linksharing
and enforced realtime guarantees. Such situations did not exist
in the original HFSC paper, where specifying separate LS / RT
service curves was not discussed.
Consider an interface with a 10Mbit capacity, with the following
leaf classes:
A - ls 5.0Mbit, rt 8Mbit
B - ls 2.5Mbit
C - ls 2.5Mbit
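
For concreteness, such a hierarchy could be set up roughly like
this (a sketch only; the device eth0, the handles and the inner
1:1 class are assumptions, not part of the example):

    tc qdisc add dev eth0 root handle 1: hfsc
    tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 10mbit
    tc class add dev eth0 parent 1:1 classid 1:11 hfsc ls m2 5mbit rt m2 8mbit    # A
    tc class add dev eth0 parent 1:1 classid 1:12 hfsc ls m2 2.5mbit              # B
    tc class add dev eth0 parent 1:1 classid 1:13 hfsc ls m2 2.5mbit              # C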
Imagine A and C are constantly backlogged. As B is idle, A and C
would divide the bandwidth in a 2:1 ratio according to their LS
service curves (so in theory, 6.66Mbit and 3.33Mbit). Alas, the
RT criterion takes priority, so A will get 8Mbit and the LS
criterion will only be able to compensate class C with the
remaining 2Mbit. This causes a growing discrepancy between the
virtual times of A and C.
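
(The drift is visible in practice: hfsc's per-class statistics,
e.g. "tc -s class show dev eth0" with the hypothetical eth0 from
the sketch above, report both total work and RT work per class;
here A's RT work keeps growing while C only receives leftover LS
service.)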
Assume this situation lasts for a long time with no idle periods,
and then B suddenly becomes active. B's virtual time will be
updated to (A's vt + C's vt)/2, effectively landing in the middle
between A's and C's virtual times. The effect: B, having no RT
guarantee, will be punished and will not be allowed to transfer
until C's virtual time catches up.
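
A purely illustrative run of the numbers (vt in arbitrary units):
suppose that after the backlogged period A's vt has advanced to
100 while C's vt lags at 40. B then becomes active and starts at
(100 + 40)/2 = 70. As the LS criterion always serves the active
class with the smallest virtual time, C (at 40) keeps being
preferred over B until its vt reaches 70 - only then does B start
receiving bandwidth.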
If the interface had a higher capacity, for example 100Mbit, this
example would behave perfectly fine: A's LS share would then
always exceed its 8Mbit RT guarantee, so the RT criterion would
never push A above its fair share.
Let's look a bit closer at the above example: it "cleverly"
invalidates one of the basic things the LS criterion tries to
achieve, namely equality of all virtual times across the class
hierarchy. Leaf classes without RT service curves are left to
their own fate (governed by messed-up virtual times).
Also, the configuration doesn't make much sense. Class A will
always be guaranteed up to 8Mbit, which is more than any absolute
bandwidth that could result from its LS criterion (excluding the
trivial case of A being the only active class). If the bandwidth
actually taken by A is smaller than the absolute value derived
from its LS criterion, the unused part is automatically assigned
to the other active classes (as A has idle periods in that case).
The only "advantage" is that even with a low average bandwidth,
bursts would be handled at the speed defined by the RT criterion.
Still, if extra speed is needed (e.g. due to latency
requirements), nonlinear service curves should be used instead.
In other words: the LS criterion is meaningless in the above
example.
You can quickly "work around" this by making sure each leaf class
has an RT service curve assigned (thus guaranteeing that all of
them will get some bandwidth), but it doesn't make the setup any
more valid.
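
For instance (hypothetical rates, continuing the earlier sketch):

    tc class change dev eth0 parent 1:1 classid 1:12 hfsc ls m2 2.5mbit rt m2 2mbit    # B
    tc class change dev eth0 parent 1:1 classid 1:13 hfsc ls m2 2.5mbit rt m2 2mbit    # C

Every leaf now has a guaranteed minimum, so none of them can be
starved outright - but the virtual times still drift apart
whenever an RT guarantee exceeds a class's LS share.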
Keep in mind: if you use nonlinear curves and the irregularities
explained above happen only in the first segment, then there is
little wrong with "overusing" the RT curve a bit:
A - ls 5.0Mbit, rt 9Mbit/30ms, then 1Mbit
B - ls 2.5Mbit
C - ls 2.5Mbit
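
In tc terms, A's two-segment RT curve could be expressed roughly
as follows (again assuming the device and class ids from the
earlier sketch); m1 is the first-segment rate, d its duration,
and m2 the rate afterwards:

    tc class change dev eth0 parent 1:1 classid 1:11 hfsc \
        ls m2 5mbit rt m1 9mbit d 30ms m2 1mbit    # A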
Here, the vt of A will "spike" in the initial period, but
afterwards A will never get more than 1Mbit until B and C catch
up. Then everything is back to normal.