
Re: ULE over DVB-H




ii) DVB-H uses the "MPE-FEC" structure, which is built upon MPE, and
uses the table_id (first byte in the MPE header) to discriminate between
data bytes and FEC overhead bytes.
Possible solution: directly port ULE to carry FEC frames (no action
needed) and use a new Typefield value to declare FEC data.

That sounds reasonable and seems straightforward to implement.
In my opinion, one more option could be to treat data+FEC as a special
kind of SNDU, in which case another Type value (data+FEC) might be
appropriate. However, that would mean the CRC is calculated over the
whole data+FEC block, and FEC recovery would be triggered even if only
the FEC part is corrupted. Again, an extension header might be necessary
to describe the type of FEC?
Yet another option would be to send data and FEC in different IP packets
and do the recovery on top of IP. But that would change the semantics of
the current FEC mode completely.

No, don't worry. The second option is exactly what DVB-H does. Data and FEC are transmitted separately, so the system stays backwards compatible. An FEC-ignorant receiver simply decapsulates the datagram and discards the FEC bytes (which are transmitted in separate SNDUs). So the CRC is calculated over the data bytes and over the RS overhead separately. No problem there.
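To make the separate-SNDU arrangement concrete, here is a minimal Python sketch. The Type value 0x0FFF used for the RS overhead is purely an invented placeholder (a real value would have to be assigned in the ULE Type registry); the CRC-32 is the MPEG-2 variant that ULE uses, computed independently for each SNDU:

```python
import struct

TYPE_IPV4 = 0x0800  # standard EtherType-style ULE Type for IPv4
TYPE_FEC  = 0x0FFF  # hypothetical Type for RS overhead -- NOT an assigned value

def crc32_mpeg2(data: bytes) -> int:
    """CRC-32 as in MPEG-2: poly 0x04C11DB7, init 0xFFFFFFFF, no reflection."""
    crc = 0xFFFFFFFF
    for b in data:
        crc ^= b << 24
        for _ in range(8):
            crc = ((crc << 1) ^ 0x04C11DB7 if crc & 0x80000000 else crc << 1) & 0xFFFFFFFF
    return crc

def build_sndu(pdu: bytes, sndu_type: int) -> bytes:
    """Build one SNDU with D=1 (no destination address field)."""
    length = 2 + len(pdu) + 4                             # Type + PDU + CRC-32
    hdr = struct.pack("!HH", 0x8000 | length, sndu_type)  # D-bit|Length, Type
    return hdr + pdu + struct.pack("!I", crc32_mpeg2(hdr + pdu))

# Data and FEC travel in separate SNDUs, each protected by its own CRC:
data_sndu = build_sndu(b"\x45" + b"\x00" * 19, TYPE_IPV4)  # dummy IPv4 header
fec_sndu  = build_sndu(b"\x00" * 64, TYPE_FEC)             # dummy RS parity bytes
```

A FEC-ignorant receiver would then simply drop SNDUs whose Type it does not recognize and decapsulate the rest as usual.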

But unfortunately, I forgot something: DVB-H also uses the section_number and last_section_number fields of the MPE header. It seems DVB decided to revive all the unused fields of MPE! Prior to calculating the FEC, a batch of incoming IP datagrams is organized into a two-dimensional array, and the R-S code is calculated over each row of it. These two one-byte MPE fields indicate the position of the encapsulated datagram within this array, which is essential for proper FEC decoding. So this issue also has to be solved for ULE (maybe with an extension header again; it just has to be incorporated into ULE).
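The position bookkeeping those fields would have to convey can be sketched as follows. This is a simplified illustration, assuming the MPE-FEC frame layout of 191 application-data columns (with 64 RS parity columns appended per row) and a column-wise fill; the Reed-Solomon computation itself is omitted, and the row count is reduced for the sketch:

```python
ROWS = 256      # DVB-H allows up to 1024 rows; kept small here
APP_COLS = 191  # application data columns; RS(255,191) adds 64 parity columns per row

def fill_table(datagrams):
    """Fill IP datagrams column-wise into the application data table.
    Returns the table plus each datagram's starting (row, column)."""
    table = [bytearray(APP_COLS) for _ in range(ROWS)]
    positions = []
    offset = 0
    for dg in datagrams:
        positions.append((offset % ROWS, offset // ROWS))  # start row, column
        for byte in dg:
            table[offset % ROWS][offset // ROWS] = byte
            offset += 1
    return table, positions
```

A ULE receiver would need exactly this (row, column) information, conveyed somehow per SNDU (e.g. in an extension header), to place each decapsulated datagram back into the table before RS decoding can run.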



I see a generic problem with ULE and DVB-H related to ULE's implicit
packing, which is a little trickier to implement under the current ULE
spec, where the packing timeout would interfere with time slicing.

Indeed. Time slicing is very tricky in itself, not to mention its possible conflict with the packing timeout. It has nothing to do with traditional TDM with fixed time slots, since both the burst length and the inter-burst time can be altered on a per-burst basis. To manage bandwidth efficiently and not throw capacity away (by transmitting empty bursts), you need a rather smart, adaptive algorithm at the encapsulator. If you decide to use ULE packing at the same time, things become even more complicated. It is an interesting research field, though.
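Purely to illustrate the adaptation problem (every constant and the scaling rule below are invented for the sketch, not taken from any spec), the encapsulator's per-burst decision might look like this: drain queued SNDUs into a burst up to its capacity, then pick the off-time until the next burst based on how full the burst came out:

```python
from collections import deque

BURST_CAPACITY = 128 * 1024  # bytes per burst (illustrative)
MIN_OFF_TIME = 0.5           # seconds the receiver may power down (illustrative)

def next_burst(queue: deque):
    """Drain queued SNDUs into one burst, up to the burst capacity.
    Returns the burst and a naive delta-t hint: a full burst means the
    next one should follow soon, a near-empty one means it can wait."""
    burst, size = [], 0
    while queue and size + len(queue[0]) <= BURST_CAPACITY:
        sndu = queue.popleft()
        burst.append(sndu)
        size += len(sndu)
    # invented adaptation rule: scale the off-time by how empty the burst was
    delta_t = MIN_OFF_TIME * (2 - size / BURST_CAPACITY)
    return burst, delta_t
```

Even this toy version shows the tension with packing: holding SNDUs back for a packing timeout delays burst assembly, while the time-slicing schedule wants the burst ready at a precise instant.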

G.Gardikis