Cbl 2026 05 06#7
Draft
raytiley wants to merge 79 commits into
Conversation
When, for example, a probe sends out an EOS event, we don't want to run the chain function with the buffer that triggered the probe.
When this property is enabled, ccextractor will expose a caption pad on the first buffer even if no caption meta is attached to it. This feature is useful when the caption branch needs to be strictly synchronized with the video branch via pushed GAP events. For instance, an application might want to produce text-related data even when none exists (as a dummy or similar form); live encoding would be a likely scenario. But by the nature of text, a stream might not contain any caption data at the beginning, in which case ccextractor cannot notify downstream of the emptiness via a GAP event, since the caption pad has not been added yet.
... in order to make use of OS-layer file I/O abstraction/optimization
... so that the user can specify the size of the read buffer. Depending on the use case, a buffer size larger than the default can optimize the read pattern and save the cost of overly frequent read accesses to the underlying storage hardware.
This parse element updates the resolution with the parsed value, which can differ from the upstream one, but the pixel-aspect-ratio is not updated. Update the pixel-aspect-ratio as well when the resolution is updated, since the upstream information is likely incorrect.
Use the given DTS value as the PTS instead of accumulating frame durations. Frame-duration accumulation is reliable only if the duration is very accurate; if it is not, it will cause an out-of-sync issue.
Key differences from the old decklink plugin:
* Single source/sink element instead of separate audio/video source/sink
* Supports old drivers (version 10.11 or newer)
* Windows DeckLink SDK build is integrated into the plugin build
... so that subclasses can push events that depend on sticky events
Add GstAggregator-based input-selector-like elements. The streamselector element behaves similarly to input-selector, but active-pad selection is done via the "active" property of streamselector's sink pads, and input streams are synchronized using the active pad's segment (same as input-selector's "sync-mode=active-segment"). In addition to streamselector, a streamselectorbin element is added, which consists of clocksync and streamselector elements. streamselectorbin supports a "sync-mode" property identical to that of input-selector.
... even when the active pad is not ready, and send the necessary sticky events when the active pad changes, even if the new active pad does not hold a buffer.
... so that the element can skip first N frames as requested.
Support restart streaming via signal action
Support automatic restart when frame dropping happens
Avoid heap allocation in capture thread
The SDK 12.4.2 doc says that if "sampleFramesWritten" is null, it's a blocking operation that finishes once all data is written to the driver. But this behavior is not mentioned in older SDK docs.
…ties

Once a large A/V desync or no-signal frames are detected, src will automatically restart streaming
Instead, do an automatic restart on the no-signal to signal state transition
The driver may wait for the callback thread to join(), but we take the same lock there
This reverts commit 2e354f6.
Add a new source mode "running-time". This mode will convert buffer running time into timecode
There are some streams out there that only have valid content on a single line (i.e. a single field). Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7825>
Use requested video-format in case of auto detect mode as well
If detected format is compatible with previous format, do not restart stream
... instead of crashing
Introduce a new "decoder-factory-sort" signal, similar to the "autoplug-sort" signal in (uri)decodebin. This allows applications to reorder the list of candidate decoder factories based on their own preferences
Previously, videoaggregator enforced consistent input interlace-mode, but this makes little sense from a base class perspective because: * Subclasses (e.g. compositor) may perform rescaling or cropping. Even without rescaling, if the target position's vertical start offset is not aligned to an even number, blending already happens between mismatched field positions, resulting in suboptimal image quality. Therefore, enforcing a consistent interlace-mode provides little real benefit. * For best image quality, blending should always occur in progressive mode, though performing pre-deinterlacing can either be the user's responsibility or be handled internally by subclasses if desired. Similarly, input formats with an alpha channel are now allowed even when the output format does not have one. Filtering out alpha-capable input formats in such cases is unnecessary, since subclasses like compositor and d3d12compositor can already handle this properly.
With this commit, overlaycomposition will be able to blend upstream overlay compositions with video frames if downstream does not support the meta. Summary of changes:
* Port to basetransform
* Notify upstream during negotiation that the overlay-composition meta is always supported by this element; the decision whether blending is needed is then made by this element
Prefer composition meta
We need caps both with and without the overlay composition meta.
Adding new mode to ignore GST_FLOW_FLUSHING
Collaborator
Rebased that already and it's in this branch now. Should be good to go
raytiley
commented
May 6, 2026
It should be super low-risk to add an extra KB to the stack at all call-sites.
The return value of snprintf already tells us that. %n is only needed for ancient snprintf implementations that didn't do so, and we do not support any of those. This allows us to use snprintf instead of sprintf, which is marginally faster.
This implementation is 66% faster (0.33x time) than the current variant. We call GST_TIME_ARGS on every single log message, and it adds up quickly. This reduces logging time by 30% (0.7x time) for a GST_INFO() macro.
When setting GST_DEBUG_FILE, the file was opened with fopen() and written to with fwrite(). This required an expensive fflush() on Windows because fwrite() doesn't support line-buffered I/O. Bypass the CRT userspace buffering by submitting whole lines in a single write to the kernel. This is equivalent to line-buffered FILE writes, and is about 12% faster.
Otherwise the object could get disposed while the task function is still running, leading to crashes and worse. While this technically creates a reference cycle in most cases, this is not actually a problem because the task has to be stopped before the element and pad can get disposed and anything else leads to exactly the problem this is solving.
@sdroege - this is the new branch I'm going to try building; however, as I mentioned in Slack, it's missing
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7388 due to a conflict. Could you or @seungha-yang rebase that and maybe get it in?