@@ -51,10 +51,10 @@ ID do not need to be serialized consecutively.
 
 ### Deserialization Rules
 Because all the fields will be decoded in data type version order, the deserialization will
-simply read the encoded input until the end of the input or until the first unknown field_id.
-Implementations MAY pass on any fields that they cannot decode, when possible (by passing-through
-the whole opaque tail of bytes starting with the first field id that the current binary does not
-understand).
+simply read the encoded input until the end of the input or until the first unknown field_id. An
+unknown field id should not be considered a parse error. Implementations MAY pass on any fields
+that they cannot decode, when possible (by passing-through the whole opaque tail of bytes
+starting with the first field id that the current binary does not understand).
 
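The tolerant decoding rule above can be sketched in Python. This is an illustration, not part of the spec diff: the one-byte version id, the tag field id `0`, and the varint-length-prefixed key/value layout are taken from the surrounding spec, while the function names (`decode_varint`, `decode_tag_context`) are invented for this example.

```python
TAG_FIELD_ID = 0  # tag field id from the spec; any other id is "unknown" here

def decode_varint(buf, pos):
    """Decode a protobuf-style varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, pos
        shift += 7

def decode_tag_context(buf):
    """Return (tags, opaque_tail).

    Reads fields until the end of input or the first unknown field id.
    An unknown field id is NOT a parse error: the remaining bytes are
    returned untouched so they can be passed through.
    """
    tags = {}
    pos = 1  # skip the one-byte version id
    while pos < len(buf):
        field_id = buf[pos]
        if field_id != TAG_FIELD_ID:
            # First unknown field id: stop and keep the opaque tail.
            return tags, buf[pos:]
        pos += 1
        klen, pos = decode_varint(buf, pos)
        key = buf[pos:pos + klen]; pos += klen
        vlen, pos = decode_varint(buf, pos)
        val = buf[pos:pos + vlen]; pos += vlen
        tags[key.decode()] = val.decode()  # a later duplicate key overwrites an earlier one
    return tags, b""
```

A serializer that re-emits `tags` followed by the unmodified `opaque_tail` implements the pass-through behaviour the spec permits.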
 ### How can we add new fields?
 If we follow the rules that we always append the new ids at the end of the buffer we can add up
@@ -135,6 +135,14 @@ https://developers.google.com/protocol-buffers/docs/encoding#varints.
  * `tag_key` is `tag_key_len` bytes comprising the tag key name.
  * `tag_val_len` is a varint encoded integer.
  * `tag_val` is `tag_val_len` bytes comprising the tag value.
+* Tags can be serialized in any order.
+* Multiple tag fields can contain the same tag key. All but the last value for
+  that key should be ignored.
+* The
+  [size limit for serialized Tag Contexts](https://github.com/census-instrumentation/opencensus-specs/blob/master/tags/TagContext.md#serialization)
+  should apply to all tag fields, even if some of them have duplicate keys. For
+  example, a serialized tag context with 10,000 small tags that all have the
+  same key should be considered too large.
 
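The length-prefixed layout listed above can be illustrated with a small Python sketch. This is not part of the spec diff: the tag field id `0` and the varint scheme follow the spec text (varints as in the protocol buffers encoding linked above), while the helper names (`encode_varint`, `encode_tag_field`) are invented for this example.

```python
def encode_varint(n):
    """Encode a non-negative integer as a protobuf-style varint (7 bits per byte,
    high bit set on all but the last byte)."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # more bytes follow
        else:
            out.append(b)
            return bytes(out)

def encode_tag_field(key, value, tag_field_id=0):
    """Encode one tag field: field id, then varint-length-prefixed key and value."""
    k, v = key.encode("utf-8"), value.encode("utf-8")
    return (bytes([tag_field_id])
            + encode_varint(len(k)) + k
            + encode_varint(len(v)) + v)
```

Since tags can be serialized in any order, a full tag context is just the version byte followed by the concatenation of `encode_tag_field(...)` outputs in any order.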
 ## Related Work
 * [TraceContext Project](https://github.com/TraceContext/tracecontext-spec)