The checksum is the XOR of the bytes between the start and end markers, and is sent as a hex-encoded value after the end marker. Like this:
$GPGSA,A,3,11,20,01,17,,,,,,,,,7.0,2.4,6.6*36\r\n
$ is the start marker, * is the end marker, and 36 (hex) is the checksum.
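As a quick illustration (my own sketch, not part of the driver code), the checksum over the body of a sentence can be computed with a fold:

```erlang
%% Sketch: XOR all bytes between '$' and '*'. For the sample
%% sentence above this evaluates to 16#36, matching the
%% transmitted "36".
nmea_sum(Body) ->
    lists:foldl(fun(B, Acc) -> Acc bxor B end, 0, Body).

%% nmea_sum("GPGSA,A,3,11,20,01,17,,,,,,,,,7.0,2.4,6.6") =:= 16#36.
```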
Last time I made the decision to match on the start of the message, like this:
case Data of
    <<"$GPGSA,", Rest/binary>> ->
        getparams(Rest);
    _ ->
        io:format("~p~n", [Data])
end,
This was actually not a good decision: the NMEA messages are so alike that I find it better to simply match on the start marker and receive a list of parsed values (tokens). So my new case looks like this:
case Data of
    <<"$", Rest/binary>> ->
        Tokens = tokenize(Rest),
        io:format("~p~n", [Tokens]);
    _ ->
        sl:setopt(Port, mode, line),
        sl:update(Port)
end,
Now the second _ clause should never match, but when you call the stop/0 and then start/0 functions, the sl module sometimes (read: bug) loses the information that line mode was requested. In these cases this clause will match and I force line mode on the Port, which works around the problem.

To split the received binary into tokens I call tokenize/1. I was hoping to keep the data in binaries and build a list of these, but constructing binaries by continuously appending bytes to them is not supported in Erlang, so for a first go I will turn the binary into a list of strings. (A possible solution to my first wish is to scan the binary, record the comma positions and use the split_binary built-in function; I will try this later and compare the performance implications.)
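As an aside, current Erlang/OTP releases ship a binary module whose binary:split/3 does exactly this kind of splitting, keeping every field as a sub-binary without converting to lists:

```erlang
1> binary:split(<<"A,3,11,20,01,17">>, <<",">>, [global]).
[<<"A">>,<<"3">>,<<"11">>,<<"20">>,<<"01">>,<<"17">>]
```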
The tokenize function will have two missions: tokenize the binary and verify the checksum. First tokenize/1:
tokenize(<<$G:8, Rest/binary>>) ->
    tokenize(Rest, [$G], [], $G).
Here I use the fact that all NMEA messages start with "G", so I can initialize the current token (second parameter) with a $G ($ is Erlang's syntax for character literals) and the calculated checksum (fourth parameter) to the same value; the third parameter is the current list of parsed tokens, which starts out empty. Now tokenize/4:
tokenize(<<$,:8, Rest/binary>>, Token, Tokens, Sum) ->
    tokenize(Rest, [], [lists:reverse(Token) | Tokens], Sum bxor $,);
tokenize(<<$*:8, Rest/binary>>, Token, Tokens, Sum) ->
    checksum(Rest, [lists:reverse(Token) | Tokens], [], Sum);
tokenize(<<N:8, Rest/binary>>, Token, Tokens, Sum) ->
    tokenize(Rest, [N | Token], Tokens, Sum bxor N).
The algorithm is simple: when a comma is first in the binary, a token is finished, so it is added to the list of parsed tokens (as I add characters at the beginning of the list in the usual Lisp style, lists:reverse needs to be called) and $, is added to the checksum. When a * is first, we move on to parse the checksum. In all other cases the first byte in the binary is added to both the current token and the checksum.

Now the checksum needs to be parsed and verified; the code below does this. Note that the first clause with the empty binary should in theory never match, but again in some circumstances (read: the stop/start issue again) this can happen, and this clause takes care of that. Otherwise it is pretty straightforward: the read checksum is converted to its numerical value using {_,ReadSum,_} = io_lib:fread("~16u", Chk):
checksum(<<>>, _, _, _) ->
    {error};
checksum(<<$\r:8, Rest/binary>>, Tokens, Chk, Sum) ->
    checksum(Rest, Tokens, lists:reverse(Chk), Sum);
checksum(<<$\n:8>>, Tokens, Chk, Sum) ->
    {_, ReadSum, _} = io_lib:fread("~16u", Chk),
    case lists:nth(1, ReadSum) of
        Sum -> lists:reverse(Tokens);
        _ -> {error}
    end;
checksum(<<N:8, Rest/binary>>, Tokens, Chk, Sum) ->
    checksum(Rest, Tokens, [N | Chk], Sum).
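Assuming the clauses above live in a module (I will call it nmea here), feeding the sample sentence minus the leading "$" through the tokenizer in the shell yields the token list. Note that the eight empty satellite fields come back as empty strings, which the shell prints as []:

```erlang
1> nmea:tokenize(<<"GPGSA,A,3,11,20,01,17,,,,,,,,,7.0,2.4,6.6*36\r\n">>).
["GPGSA","A","3","11","20","01","17",[],[],[],[],[],[],[],[],
 "7.0","2.4","6.6"]
```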
That's it: we get the binary split into a list of tokens. Next I will turn these lists into more structured records.