receiver/azuremonitor: fix StartTimestamp > Timestamp in batch API path#47749
Open
Sanil2108 wants to merge 1 commit into open-telemetry:main from
Conversation
In the use_batch_api path, ts comes from metricValue.TimeStamp (Azure's aggregation window end time, potentially hours in the past). However, AddDataPoint was calling dp.SetStartTimestamp(mb.startTime), where mb.startTime is the scrape time (now), resulting in StartTimestamp > Timestamp, which violates the OTel data model.

Fix by using ts for StartTimestamp so it is always equal to Timestamp. This ensures the constraint StartTimestamp <= Timestamp is always met.

The existing tests used IgnoreStartTimestamp() and did not catch this. A new assertion is added to metrics_test.go to explicitly verify that StartTimestamp equals the data point Timestamp, not mb.startTime.

Fixes open-telemetry#47705
Contributor
Welcome, contributor! Thank you for your contribution to opentelemetry-collector-contrib. Important reminders:

A maintainer will review your pull request soon. Thank you for helping make OpenTelemetry better!
Member
celian-garcia
left a comment
Oh, thank you for referencing the data model spec. Reading it, I'm realising that it indeed makes no sense in this case to ever have a separate start timestamp. Azure Monitor metrics are all gauges: no counters, no histograms, ...
We could even remove it, but I like seeing it set explicitly.
Member
I don't have the right to approve the workflow, but I already know it won't pass: this should contain a changelog entry. This is described in the contributing guidelines; can you have a look?
Fixes #47705
Problem

When use_batch_api: true, processQueryTimeseriesData in scraper_batch.go sets the data point timestamp (ts) from metricValue.TimeStamp, Azure's aggregation window end time, which can be up to an hour in the past. However, AddDataPoint in internal/metadata/metrics.go was calling dp.SetStartTimestamp(mb.startTime), where mb.startTime is the scrape time (now).

This results in StartTimestamp > Timestamp, which violates the OTel data model requirement that StartTimestamp <= Timestamp. Downstream consumers such as Prometheus remote-write reject these data points.

Fix

Use ts for StartTimestamp so it always equals Timestamp: dp.SetStartTimestamp(ts). For the non-batch scraper, ts = time.Now(), so the behaviour is unchanged (StartTimestamp ≈ Timestamp ≈ now). For the batch scraper, StartTimestamp now correctly equals the aggregation window timestamp instead of the scrape time.

Testing

A new assertion is added to metrics_test.go that explicitly verifies StartTimestamp == Timestamp (the existing tests used IgnoreStartTimestamp() and did not catch this regression).