
How can I size/match a record component while reading in its data from a Stream in Ada?

Very specific question, but we have some good Ada people here so I would like to hear thoughts. I'm reading data from a file used for embedded systems. The data chunks I'm working with always have a predictable header format, but there's one problem: the data payload length is given as part of the format, right before the payload occurs. So I don't know the payload size until I read certain bytes at a known position in the header. The chunks occur one after the other.

Literally the format is ([ ] used for readability):

[2byte TAG] [1byte length of payload (LSB)] [1byte length of payload (MSB)] [PAYLOAD]

The payload is human-readable configuration text. The next TAG will be the next two bytes after the previous payload and so on until I don’t see any matching a known TAG after the last payload. Then I know I’m done.

I am reading this out of a file using Direct_IO, but I might switch to a more general stream and just start performing conversions.

I'd like to store all of this in a simple record. At the end of the day, I'm looking for a technique where I can read the data in, and upon reading the length bytes I know the payload size and can size the array or String component to hold the data at that moment, while the record is already acting as the read buffer. That is, I need to read the TAG and length data in, so I'd like to store them immediately in a record, and I want to store the payload in that same record if I can. I could consider using an access type and dynamically creating the payload storage, but that means I have to stop the read after the header, then do work, and then continue. Plus, writing would have the same problem, since the object's representation would no longer match the expected chunk format.

I was thinking about trying to use a record to hold all this, with a discriminant for the payload size, and a representation clause on that record to mimic the exact format described above. With the discriminant being the length field in both the record and the data chunk, I might be able to do a conversion and simply "lay" the data into the object. But I don't have a way to size the component when I instantiate the record without already having read the TAG and length. I assume I cannot read AND create the object at the same time, so to create the object I need a length. While I could keep fiddling with the file position, read what I need, then go back to the beginning and then create and consume the entire chunk, I know there has got to be a better "Ada" way.

Is there a way that I can use the representation clause to fill the header into the record, so that when the discriminant is filled with a value from the data, the size of the record's array or String Payload component would be set?

Also, this isn't just for reading; I need to find a nice way of outputting this exact format to the file when the configuration changes. So I was hoping to use a representation clause to match the underlying format, so that I could literally "write" the object out to the file and it would be in the correct format. I was hoping to do the same for reads.

All the Ada read examples I've seen so far are with records of known length (or known max length), where the record reads in a statically sized data chunk.

Does someone have an example, or can point me in the right direction on how I might use this approach to deal with this variably sized payload?

Thanks for any help you can provide,

-Josh


Essentially the way this is done is to do a partial read, enough to get the number of bytes, then read the rest of the data into a discriminated record.

Something like the following in pseudo-Ada:

type Payloads is array (Payload_Sizes range <>) of Your_One_Byte_Payload_Type;

type Data (Payload_Length : Payload_Sizes) is
   record
      Tag : Tag_Type;
      Payload : Payloads(1 .. Payload_Length);
   end record;

for Data use record
   Tag            at 0 range 0 .. 15;
   Payload_Length at 2 range 0 .. 15;
   -- Omit a rep spec for Payload
end record;

Typically the compiler will locate the Payload data immediately following the last rep-spec'ed field, but you'll need to verify this with the vendor, or do some test programs. (There may be a more explicit way to specify this, and I'm open to having this answer updated if someone provides a workable approach.)

And do not provide a default for the Payload_Length discriminant, as that would cause instances of the record to always reserve the maximum amount of storage needed for the largest value.
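To illustrate that point (a sketch with hypothetical names, not taken from the question): a defaulted discriminant makes the type mutable, so unconstrained objects must be able to hold the largest payload, while omitting the default forces each object to be constrained, and thus exactly sized, at its declaration.

```ada
type Byte is mod 2**8;
subtype Payload_Len is Natural range 0 .. 65_535;  --  2-byte length field
type Byte_Array is array (Positive range <>) of Byte;

--  Mutable: "B : Blob_Mutable;" is legal, and the compiler may reserve
--  the full 65_535 bytes of payload storage in every such object.
type Blob_Mutable (Len : Payload_Len := 0) is record
   Payload : Byte_Array (1 .. Len);
end record;

--  Immutable: every object must be constrained at declaration,
--  e.g. "B : Blob (Len => 12);", so storage matches the payload.
type Blob (Len : Payload_Len) is record
   Payload : Byte_Array (1 .. Len);
end record;
```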

Then, in your data reading code, something along the lines of:

loop
   Get(Data_File, Tag);
   Get(Data_File, Payload_Length);

   declare
      Data_Item : Data(Payload_Length);
   begin
      Data_Item.Tag := Tag;
      Get(Data_File, Data_Item.Payload);
      Process_Data(Data_Item);
   end;
   ...
   exit when Whatever;
end loop;

(You'll need to work out your exit criteria.)

Data_Item will then be dynamically sized for the Payload_Length. Beware, though, if that length is odd, as padding may occur...
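One way to catch such layout surprises early (a sketch, assuming the declarations above with 8-bit payload elements) is to assert the object size for a representative constraint:

```ada
--  Header is 4 bytes (2-byte Tag + 2-byte length); with an 8-byte
--  payload we expect 12 bytes total. If the compiler inserts padding,
--  this check fails at elaboration and you know to investigate.
subtype Eight_Byte_Data is Data (Payload_Length => 8);
pragma Assert (Eight_Byte_Data'Size = 12 * 8);
```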


This situation is precisely what the attribute 'Input is in the language for.

If you also own the code that writes that data out to the stream in the first place, then this is easy. Just use

MyObject : Discriminated_Record := Discriminated_Record'Input (Some_Stream'Access);

(and of course use 'Output when writing).

If you must instead read in someone else's formatted data, it gets a wee bit more complicated. You will have to implement your own 'input routine.

function Discriminated_Record_Input
    (Stream : access Ada.Streams.Root_Stream_Type'Class)
    return Discriminated_Record;
for Discriminated_Record'Input use Discriminated_Record_Input;

In your implementation of Discriminated_Record_Input, you can get around the discriminant issue by doing everything in the declaration section, or using a local declare block. (warning: uncompiled code)

function Discriminated_Record_Input
    (Stream : access Ada.Streams.Root_Stream_Type'Class)
    return Discriminated_Record
is
    Size : constant Natural := Natural'Input (Stream);
    Data : constant Discriminated_Record :=
        (Size, (others => Byte_Type'Input (Stream)));
begin
    return Data;
end Discriminated_Record_Input;

The main drawback to this is that your data may get copied twice (once into the local constant, then once again from there into MyObject). A good optimizer might fix this, as would adding lvalue references to the language (but I don't know if that's being contemplated).
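For the writing direction the same approach works in reverse: a hand-written 'Output (uncompiled sketch, reusing the same hypothetical names; "Size" stands for whatever your discriminant is called) that emits the length and then the raw payload bytes, so the stream matches the external format rather than the compiler's default representation:

```ada
procedure Discriminated_Record_Output
    (Stream : access Ada.Streams.Root_Stream_Type'Class;
     Item   : Discriminated_Record);
for Discriminated_Record'Output use Discriminated_Record_Output;

procedure Discriminated_Record_Output
    (Stream : access Ada.Streams.Root_Stream_Type'Class;
     Item   : Discriminated_Record) is
begin
   --  Length first, mirroring the read order. Note: Natural'Write uses
   --  the compiler's default size for Natural; for an exact 2-byte
   --  LSB/MSB field you would instead write two Byte_Type values.
   Natural'Write (Stream, Item.Size);
   for B of Item.Payload loop       --  Ada 2012 element iteration
      Byte_Type'Write (Stream, B);
   end loop;
end Discriminated_Record_Output;
```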


Building on Marc C's and T.E.D.'s answers, a couple of points to consider:

  1. As you have mentioned that this is an embedded system, I would also read the project requirements regarding dynamic memory allocation, as many embedded systems explicitly forbid dynamic memory allocation/deallocation. (Check your implementation of Ada.Streams.)

  2. The format you have described is reminiscent of satellite payload formats; am I correct? If so, read the spec carefully, as those 'sizes' are often more correctly termed 'maximum indexable offset from this point, starting from 0', i.e. size - 1.

  3. A discriminated record is probably the way to go, but you may have to make it an immutable record with a length field (not the discriminant) to guarantee that your program will not allocate too much memory. Fix it to your 'reasonable max'.
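Point 3 might look like this (a sketch; Max_Payload and all the names here are assumptions to adapt to your project):

```ada
Max_Payload : constant := 1_024;  --  your 'reasonable max'

type Byte is mod 2**8;
type Byte_Array is array (Positive range <>) of Byte;
subtype Payload_Count is Natural range 0 .. Max_Payload;

--  Fixed-size record: no discriminant-dependent storage, so every
--  object is the same size and no dynamic allocation is needed.
--  Length records how many Payload bytes are actually valid.
type Bounded_Chunk is record
   Tag     : Natural range 0 .. 2**16 - 1 := 0;
   Length  : Payload_Count := 0;
   Payload : Byte_Array (1 .. Max_Payload) := (others => 0);
end record;
```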

