Fix int overflow in tensor byte-size calculation #3551
Open
Forbiddem wants to merge 1 commit into
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
`BytesRequiredForTensor`, `TfLiteEvalTensorByteLength`, and
`AllocateOutputDimensionsFromInput` in `tensorflow/lite/micro/memory_helpers.cc`
all multiplied tensor shape dimensions into a signed 32-bit `int element_count`
running product before assigning the result to a `size_t` byte count. Shape
dimensions and the element type size are sourced from the (untrusted)
flatbuffer model, so a model with a shape such as [65536, 65536] over a
4-byte element type produces an element-count product of 2^32, which wraps to
0 in the original code and yields `*bytes = 0`. Subsequent buffer
arithmetic in the allocator then operates on a tensor whose advertised size
is 0 while its dimensions imply ~17 GB (16 GiB) of storage, opening a memory
corruption surface.
This change:
* Performs the running product in `size_t`, matching the result type, and
rejects negative shape dimensions with `kTfLiteError`.
* Guards every multiplication (shape * shape and elements * type_size)
against overflow of `size_t` and returns `kTfLiteError` when the next
multiplication would wrap.
* Propagates the previously discarded status of `TfLiteTypeSizeOf` in
`AllocateOutputDimensionsFromInput`.
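The guarded computation described above can be sketched as follows (a minimal standalone approximation, not the PR's exact diff; `Status` and `BytesRequiredSketch` are hypothetical stand-ins for `TfLiteStatus` and the real helpers):

```cpp
#include <cstddef>
#include <cstdint>

enum Status { kOk, kError };  // stand-in for kTfLiteOk / kTfLiteError

// Running product carried in size_t; every multiplication is checked for
// wrap before it is performed, and negative dimensions are rejected.
Status BytesRequiredSketch(const int* dims, int num_dims, size_t type_size,
                           size_t* bytes) {
  size_t element_count = 1;
  for (int i = 0; i < num_dims; ++i) {
    if (dims[i] < 0) return kError;  // negative shape dimension
    const size_t d = static_cast<size_t>(dims[i]);
    // element_count * d wraps iff d != 0 and element_count > SIZE_MAX / d.
    if (d != 0 && element_count > SIZE_MAX / d) return kError;
    element_count *= d;
  }
  if (type_size != 0 && element_count > SIZE_MAX / type_size) return kError;
  *bytes = element_count * type_size;
  return kOk;
}
```

On a 64-bit `size_t`, this yields 17,179,869,184 for shape `[65536, 65536]` with a 4-byte type; a negative dimension, or a product that would exceed `SIZE_MAX`, returns the error status instead of a wrapped value.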
Adds three regression tests in `memory_helpers_test.cc`:
* Shape [65536, 65536] float32 reports the correct 17,179,869,184 byte
length on platforms with a 64-bit `size_t` and `kTfLiteError` on
platforms with a 32-bit `size_t` (instead of the pre-fix value of 0).
* A shape that overflows even a 64-bit `size_t` is rejected with
`kTfLiteError`.
* A negative shape dimension is rejected with `kTfLiteError`.
All existing `memory_helpers_test` cases continue to pass.
82a8d21 to 20b8aaf
BUG=#3552
Summary

`BytesRequiredForTensor`, `TfLiteEvalTensorByteLength`, and `AllocateOutputDimensionsFromInput` in `tensorflow/lite/micro/memory_helpers.cc` all multiply tensor shape dimensions into a signed 32-bit `int element_count` running product before assigning the result to a `size_t` byte count. Shape dimensions and the element type size come from the (untrusted) flatbuffer model, so a model with a shape such as `[65536, 65536]` over a 4-byte element type produces a product of 2^32, which wraps to 0 in the original code and yields `*bytes = 0`. Subsequent buffer arithmetic in the allocator then operates on a tensor whose advertised size is 0 while its dimensions imply ~17 GB (16 GiB) of storage, opening a memory corruption surface when a model is loaded from an untrusted source.

This PR:

* Performs the running product in `size_t`, matching the result type, and rejects negative shape dimensions with `kTfLiteError`.
* Guards every multiplication (`shape * shape` and `elements * type_size`) against overflow of `size_t` and returns `kTfLiteError` when the next multiplication would wrap.
* Propagates the previously discarded status of `TfLiteTypeSizeOf` in `AllocateOutputDimensionsFromInput`.

Test plan

* `make -f tensorflow/lite/micro/tools/make/Makefile memory_helpers_test` builds clean.
* `gen/linux_x86_64_default_gcc/bin/memory_helpers_test` — all 10 cases pass, including the 3 new regression tests:
  * `TfLiteEvalTensorByteLengthDoesNotTruncateAcrossInt32Boundary` — shape `[65536, 65536]` float32 now reports 17,179,869,184 bytes on 64-bit `size_t` platforms (previously: 0), and `kTfLiteError` on 32-bit `size_t` platforms.
  * `TfLiteEvalTensorByteLengthRejectsSizeTOverflow` — a shape that overflows even a 64-bit `size_t` is rejected with `kTfLiteError`.
  * `TfLiteEvalTensorByteLengthRejectsNegativeDimension` — a negative shape dimension is rejected with `kTfLiteError`.

Happy to switch the manual `>` checks to `__builtin_mul_overflow` if that's preferred for this tree.