Message ID: 20220727103639.581567-1-tomi.valkeinen@ideasonboard.com
Series: v4l: routing and streams support
Moi,

On Wed, Jul 27, 2022 at 01:36:09PM +0300, Tomi Valkeinen wrote:
> Hi,
>
> This is v12 of the streams series. The v11 can be found from:

Thanks for the update. This continues to be a set that will need changes
before merging, hopefully less so than in the past. But at the same time I
think there are a number of patches early in the set that could be merged
now. Specifically I mean patches 1--5 and 7.

I'll pick those if there are no objections once we have rc1 in the media
tree. That should at least make it a little easier to work with the rest.
Hi,

On 31/07/2022 23:47, Sakari Ailus wrote:
> Moi,
>
> On Wed, Jul 27, 2022 at 01:36:09PM +0300, Tomi Valkeinen wrote:
>> Hi,
>>
>> This is v12 of the streams series. The v11 can be found from:
>
> Thanks for the update. This continues to be a set that will need changes
> before merging, hopefully less so than in the past. But at the same time I
> think there are a number of patches early in the set that could be merged
> now. Specifically I mean patches 1--5 and 7.
>
> I'll pick those if there are no objections once we have rc1 in the media
> tree. That should at least make it a little easier to work with the rest.

I'm fine with that. Everything up to, and including, patch 9 is a
kernel-only patch; they don't change the uAPI.

 Tomi
Hi,

On 03/08/2022 12:03, Milen Mitkov (Consultant) wrote:
> Hey Tomi,
>
> thank you for providing v12 of the routing and streams support patches!
> We are using v11 of these to add CSI virtual channel support to the
> Qualcomm Titan platform (a.k.a. the qcom-camss driver) and will be moving
> to the brand new fresh v12 now.
>
> I ponder several questions with regards to this newly added
> functionality (not just v12, but all versions in general):
>
> 1. What are the main benefits of having multiple streams, that can be
> enabled/disabled/configured, on the same pad vs. say having multiple
> pads, each of which can be enabled/disabled/configured?

Streams and pads are kind of similar, but also different. One difference
is conceptual: a pad represents some kind of physical connector, while
streams are logical, "virtual" pads or connectors. But perhaps the main
practical difference is that you have a hardcoded number of pads, while
the number of streams is dynamic, adjusted based on the routing table.

> As far as I understood, in the user-space world each stream on the end
> of the pipeline gets exposed as a /dev/video%d node. Each node
> corresponds to a videodev which is wrapped in a media entity that has a

Yes. It would be nice to have a videodev that supports multiple streams,
but at the moment there's no API for that kind of thing. Perhaps in the
future.

> single sink pad. So in the end, all streams end up as multiple "stream
> -> pad" correspondences. I am sure there is some benefit of having
> multiple streams vs having multiple pads, but I am struggling to put it
> into exact words.

Consider a bridge device with, say, 2 CSI-2 inputs and 2 CSI-2 outputs.
The device can route streams from either input to either output, and
possibly modify them (say, change the virtual channel number). How many
pads would we have there? You would need a predefined number of pads, one
for each stream. So how many streams? What's a stream?
With CSI-2, we can at least define that streams are identified by the VC
and the DT. So, 4 VCs at max, but there can be many DTs. I don't remember
how wide the DT field is, but let's say 10 DTs. That would be a maximum of
40 streams per input. So the above device would need 40 * 4 pads to cover
"all" possible uses.

I say "all", because it's not clear how to define a stream. If the device
in question can, say, split the incoming frame per line, and somehow
output each of those lines separately, then, effectively, there would be
as many streams as there are lines. That's a silly example, but I just
want to highlight the dynamic nature of streams.

> 2. What is your userspace modus operandi with regards to testing these

I have my own python scripts built on top of kms++. They're really not in
such a condition that I could share them with others. Maybe I'll find the
time to clean them up at some point...

> changes? For example, I figured out this much with regards to media-ctl:

Yes, it should all be doable with media-ctl and v4l2-ctl.

> media-ctl -R '"msm_csid0"[0/0->1/0[0x1],0/0->1/1[0x1]]'
>
> If I want to configure the CSI decoder subdev (msm_csid0) to receive 1
> stream on the sink pad and route it to 2 streams on its source pad. Is
> my thinking correct?

Yes, if your HW can do that. I don't have HW that can split (or clone) a
stream, so it's possible that the use case doesn't work.

> And I also wonder what is your preferred method to open each /dev/video
> node in userspace concurrently? Are you, say, running 2 or more parallel
> instances of yavta?

I do it with my script, but yes, I think running multiple yavtas should
do the trick.

> 3. I assume, that with these changes, it's _strongly_ preferred that the
> subdevice's active state management is left to the V4L2 API and not kept
> internally like older drivers do?

A subdev that uses routing/streams _must_ use the subdev active state.

 Tomi
On 05/08/2022 18:14, Milen Mitkov (Consultant) wrote:
>>> If I want to configure the CSI decoder subdev (msm_csid0) to receive 1
>>> stream on the sink pad and route it to 2 streams on its source pad. Is
>>> my thinking correct?
>>
>> Yes, if your HW can do that. I don't have HW that can split (or clone) a
>> stream, so it's possible that the use case doesn't work.
>
> Now here's the main question. We use the CSI decoder (CSID) hardware to
> split one stream from the sensor into 2 or more streams based on
> datatype or CSI virtual channel.
>
> Basically, the complete pipeline is something like this, for 2 virtual
> channels:
>
>                            -> ISP line 0 -> videodev /dev/video0
>                           /
> sensor -> CSIPHY -> CSID -
>                           \
>                            -> ISP line 1 -> videodev /dev/video1
>
> So my idea was to make the CSID subdevice multistream API compliant
> (e.g. V4L2_SUBDEV_FL_STREAMS, manage the active state with the
> V4L2 API v4l2_subdev_get_locked_active_state, take care of routing setup
> etc.), but keep the rest of the devices the way they are.

That's not how the streams support has been designed. Your sensor
provides two streams, and all the drivers that pass through multiple
streams need to be ported to the streams API. So in your case, I believe
everything but the "ISP line" needs to support streams.

> The CSID subdev must take 1 stream on the sink pad and output on 2
> source pads.
>
> The routing configuration I use for the CSID subdev looks like this:
>
> media-ctl -R '"msm_csid0"[0/0->1/0[0x1],0/0->2/0[0x1]]'
>
> 0 - sink pad, 1 - first source pad, 2 - second source pad
>
> However, this routing setup fails with the validation in
> v4l2_link_validate_get_streams(). The logic there figures these are
> duplicate streams because they start at the same sink pad.
>
> To summarize my questions:
>
> 1. Is there some sort of restriction that the same sink pad can't be
> used for more than 1 stream starting from it?

In theory no, but it hasn't been tested.
I think this case would mainly be cloning of the stream, not really
splitting it.

> 2. Is it ok to migrate only one subdevice to the multistream API
> or should all possible subdevices in the pipeline be migrated?

It's ok to mix streams and non-streams subdevices, but the non-streams
subdevs must only use a single stream. E.g. you could have 4 non-streams
cameras, each providing a single stream to a bridge. The bridge would
support streams, and it would send the 4 streams over a single CSI-2 bus.

Now, that said, I don't think anything strictly prevents supporting
stream splitting, but as I mentioned above, it hasn't been tested or
really even considered very much. It's also a bit ambiguous and unclear,
and I'd stay away from it if full streams support makes sense.

I think if a source subdev (sensor) knows that it's providing multiple
streams, then it should use the streams API to provide those. I.e. if the
sensor is providing different types of data, using VCs or DTs, then those
are clearly separate streams and the sensor driver must be aware of them.

Stream splitting might come into play in situations where the sensor
provides just a single stream, but a bridge subdev splits it based on
information the sensor can't be aware of. For example, the sensor
provides a normal pixel stream, and the bridge subdev splits the frames
into two halves, sending the upper half to output 1 and the lower half to
output 2.

 Tomi
On Mon, Aug 08, 2022 at 09:45:38AM +0300, Tomi Valkeinen wrote:
> On 05/08/2022 18:14, Milen Mitkov (Consultant) wrote:
> >>> If I want to configure the CSI decoder subdev (msm_csid0) to receive 1
> >>> stream on the sink pad and route it to 2 streams on its source pad. Is
> >>> my thinking correct?
> >>
> >> Yes, if your HW can do that. I don't have HW that can split (or clone) a
> >> stream, so it's possible that the use case doesn't work.
> >
> > Now here's the main question. We use the CSI decoder (CSID) hardware to
> > split one stream from the sensor into 2 or more streams based on
> > datatype or CSI virtual channel.
> >
> > Basically, the complete pipeline is something like this, for 2 virtual
> > channels:
> >
> >                            -> ISP line 0 -> videodev /dev/video0
> >                           /
> > sensor -> CSIPHY -> CSID -
> >                           \
> >                            -> ISP line 1 -> videodev /dev/video1
> >
> > So my idea was to make the CSID subdevice multistream API compliant
> > (e.g. V4L2_SUBDEV_FL_STREAMS, manage the active state with the
> > V4L2 API v4l2_subdev_get_locked_active_state, take care of routing setup
> > etc.), but keep the rest of the devices the way they are.
>
> That's not how the streams support has been designed. Your sensor
> provides two streams, and all the drivers that pass through multiple
> streams need to be ported to the streams API. So in your case, I believe
> everything but the "ISP line" needs to support streams.

To add a bit of information here, the important thing to understand is
that streams and physical links are two different concepts. The above
diagram describes the physical links (both outside the SoC and inside
it). Streams are carried by physical links, and a link can carry multiple
streams (hence the name "multiplexed streams" used in this patch series).

If the sensor outputs image data and embedded data with two CSI-2 DTs on
one VC, that's two streams carried over the sensor -> CSIPHY link, and
the same two streams going over the CSIPHY -> CSID link.
The CSID demultiplexes the streams, with one stream going to ISP line 0
and the other one to ISP line 1. As Tomi explained, every subdev that
deals with multiple streams has to implement the new API. This includes,
in this case, the sensor, the CSIPHY and the CSID.

If the sensor were to output two images in different resolutions over two
VCs, it would conceptually be the same, with two streams. If it were to
output image data, embedded data and black level lines with 3 DTs over
one VC, that would be three streams. And so on.

> > The CSID subdev must take 1 stream on the sink pad and output on 2
> > source pads.
> >
> > The routing configuration I use for the CSID subdev looks like this:
> >
> > media-ctl -R '"msm_csid0"[0/0->1/0[0x1],0/0->2/0[0x1]]'
> >
> > 0 - sink pad, 1 - first source pad, 2 - second source pad
> >
> > However, this routing setup fails with the validation in
> > v4l2_link_validate_get_streams(). The logic there figures these are
> > duplicate streams because they start at the same sink pad.
> >
> > To summarize my questions:
> >
> > 1. Is there some sort of restriction that the same sink pad can't be
> > used for more than 1 stream starting from it?
>
> In theory no, but it hasn't been tested. I think this case would mainly
> be cloning of the stream, not really splitting it.
>
> > 2. Is it ok to migrate only one subdevice to the multistream API
> > or should all possible subdevices in the pipeline be migrated?
>
> It's ok to mix streams and non-streams subdevices, but the non-streams
> subdevs must only use a single stream. E.g. you could have 4 non-streams
> cameras, each providing a single stream to a bridge. The bridge would
> support streams, and it would send the 4 streams over a single CSI-2 bus.
>
> Now, that said, I don't think anything strictly prevents supporting
> stream splitting, but as I mentioned above, it hasn't been tested or
> really even considered very much.
> It's also a bit ambiguous and unclear, and I'd stay away from it if full
> streams support makes sense.
>
> I think if a source subdev (sensor) knows that it's providing multiple
> streams, then it should use the streams API to provide those. I.e. if
> the sensor is providing different types of data, using VCs or DTs, then
> those are clearly separate streams and the sensor driver must be aware
> of them.
>
> Stream splitting might come into play in situations where the sensor
> provides just a single stream, but a bridge subdev splits it based on
> information the sensor can't be aware of. For example, the sensor
> provides a normal pixel stream, and the bridge subdev splits the frames
> into two halves, sending the upper half to output 1 and the lower half
> to output 2.

We've tested splitting on an i.MX8MP, with two different processing
pipelines capturing the stream produced by a single YUV sensor, in
different resolutions and formats. It works (or at least worked with v11
of the streams series; I'll update the code to v13 and retest).