The LS criterion's task is to distribute bandwidth according to
the specified class hierarchy. Contrary to the RT criterion,
there are no comparisons between the current real time and
virtual time - the decision is based solely on a direct
comparison of the virtual times of all active subclasses - the
one with the smallest vt wins and gets scheduled. One immediate
conclusion from this fact is that absolute values don't matter -
only the ratios between them do (so, for example, two child
classes with simple linear 1Mbit service curves will get the
same treatment from the LS criterion's perspective as if they
were 5Mbit ones). The other conclusion is that in a perfectly
fluid system with linear curves, all virtual times across the
whole class hierarchy would be equal.
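
To make the decision concrete, here is a minimal Python sketch
of it (the pick_ls_child() helper and the dictionary fields are
made up for this illustration; this is not the sch_hfsc.c code):

  def pick_ls_child(children):
      """Return the active child with the smallest virtual time."""
      active = [c for c in children if c["active"]]
      return min(active, key=lambda c: c["vt"], default=None)

  children = [
      {"name": "A1", "vt": 120, "active": True},
      {"name": "A2", "vt":  80, "active": True},
      {"name": "A3", "vt":  50, "active": False},  # idle - ignored
  ]
  print(pick_ls_child(children)["name"])  # -> A2 (smallest active vt)
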
Why is VC defined in terms of virtual time (and what is virtual
time anyway)? Imagine an example: class A with two children, A1
and A2, both with, let's say, 10Mbit SCs. If A2 is idle, A1
receives all the bandwidth of A (and updates its V() in the
process). When A2 becomes active, A1's virtual time is already
far ahead of A2's. Considering the type of decision made by the
LS criterion, A2 would then win every round until its virtual
time caught up, and A1 would not be scheduled for a long time.
We can work around this situation by adjusting the virtual time
of the class becoming active - we bring such time "up to date".
HFSC uses the mean of the smallest and the biggest virtual time
of the currently active children fit for sending. As it's not
real time anymore (excluding the trivial case where all classes
become active at the same time and never become idle), it's
called virtual time.
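
A sketch of this adjustment, using the same made-up
representation as above (again only an illustration, not the
kernel's code):

  def activate(cls, active_siblings):
      """Bring a newly active class's vt "up to date": the mean of
      the smallest and the biggest vt among the siblings that are
      already active and fit for sending."""
      if active_siblings:
          vts = [c["vt"] for c in active_siblings]
          cls["vt"] = (min(vts) + max(vts)) / 2
      cls["active"] = True

  a1 = {"name": "A1", "vt": 900, "active": True}   # ran alone for a while
  a2 = {"name": "A2", "vt":   0, "active": False}  # was idle so far
  activate(a2, [a1])
  print(a2["vt"])  # -> 900.0, so A2 competes with A1 right away
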
Such an approach has its price though. The problem is analogous
to what was presented in the previous section and is caused by
the non-linearity of service curves:
1) either it's impossible to guarantee service curves and
satisfy fairness during certain time periods:
Recall the example from the RT section, slightly modified
(with 3Mbit slopes instead of 2Mbit ones):
• 1st class - 3Mbit for 100ms, then 7Mbit (convex - 1st
slope < 2nd slope)
• 2nd class - 7Mbit for 100ms, then 3Mbit (concave - 1st
slope > 2nd slope)
They sum up nicely to 10Mbit - the interface's capacity. But
if we wanted to use only LS for both guarantees and fairness,
it simply won't work. In the LS context, only V() is used for
deciding which class to schedule. If the 2nd class becomes
active while the 1st one is in its second slope, fairness will
be preserved - the ratio will be 1:1 (7Mbit:7Mbit) - but LS
itself is of course unable to guarantee the absolute values
themselves, as that would require going beyond what the
interface is capable of (see the sketch after this list).
2) and/or it's impossible to guarantee service curves of all
classes at the same time [fairly or not]:
This is similar to the above case, but a bit more subtle. We
will consider two subtrees, arbitrated by their common parent
(the root here):
R (root) - 10Mbit
  A  - 7Mbit, then 3Mbit
    A1 - 5Mbit, then 2Mbit
    A2 - 2Mbit, then 1Mbit
  B  - 3Mbit, then 7Mbit
R arbitrates between the left subtree (A) and the right one
(B). Assume that A2 and B are constantly backlogged, and at
some later point A1 becomes backlogged as well (when all the
other classes are already in their 2nd linear part).
What happens now? B (the choice made by R) will always get
7Mbit, as R is (obviously) concerned only with the ratio
between its direct children. Thus the A subtree gets 3Mbit,
but its children would want (at the point when A1 became
backlogged) 5Mbit + 1Mbit = 6Mbit. That's of course impossible,
as they can only get 3Mbit due to the interface limitation.
In the left subtree we have the same situation as previously
(a fair split between A1 and A2, but violated guarantees),
while in the whole tree there's no fairness (B got 7Mbit, but
A1 and A2 have to fit together into 3Mbit) and there are no
guarantees for all classes (only B got what it wanted). Even
if we violated fairness in the A subtree and set A2's service
curve to 0, A1 would still not get the required bandwidth.
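
The arithmetic behind both numbered cases, as a Python sketch
(illustration only; the split() helper and the direct use of
slope ratios are a simplification - real HFSC compares virtual
times, which for linear segments converge to exactly these
slope-proportional shares):

  LINK = 10.0  # Mbit, the interface capacity

  def split(capacity, slopes):
      """Distribute capacity between children proportionally to
      their current service-curve slopes (the ratio that LS
      fairness enforces)."""
      total = sum(slopes.values())
      return {name: capacity * s / total for name, s in slopes.items()}

  # Case 1): both classes in their 7Mbit slopes - the ratio is 1:1,
  # so each gets only 5Mbit of the 10Mbit link, below the 7Mbit
  # their curves would require.
  print(split(LINK, {"1st": 7.0, "2nd": 7.0}))  # {'1st': 5.0, '2nd': 5.0}

  # Case 2): R splits the link between the subtrees by their current
  # slopes (A: 3Mbit, B: 7Mbit), then A splits its share between A1
  # (just became backlogged, 5Mbit slope) and A2 (1Mbit slope).
  top = split(LINK, {"A": 3.0, "B": 7.0})        # {'A': 3.0, 'B': 7.0}
  sub = split(top["A"], {"A1": 5.0, "A2": 1.0})  # {'A1': 2.5, 'A2': 0.5}
  print(top, sub)
  # B gets its 7Mbit, but A1 (wants 5Mbit) and A2 (wants 1Mbit) must
  # fit into the remaining 3Mbit - neither guarantee holds.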