Just a heads up, it seems the ML issue is still there - https://ml.whitebox.aero/archives/list/dev@ml.whitebox.aero/thread/RE7TNLBLO... - so this reply will end up on the ML, but probably in a separate (new) thread. Thanks, I'll get back with the results when I try it out.
On 9. 3. 2026. 23:43, Lynne wrote:
Hi,
Thanks for sorting the email issue out.
I think you should definitely rerun the test on 6.13 (I presume this uses the Mesa panfrost/panvk driver) and use the latest FFmpeg git master. 8.1, which will ship next week, has a ton of improvements.
Just in case, could you post the log, adding `-v trace` at the start of the command?
On 09/03/2026 22:26, milos@whitebox.aero wrote:
Hey Lynne,
Your emails should go through instantly now.
I've tried running that filter, but I couldn't get it working, neither as part of our current pipeline, nor standalone (using only the filter from your sample). Pasted my results here: https://gitlab.com/whitebox-aero/whitebox/-/snippets/5967510
I see that Vulkan has had considerable driver improvements in kernel 6.13, and we're currently running on 6.1 - would upgrading be a requirement for running this (whether for Vulkan itself, or for the latest FFmpeg version in some way)? I'll try to get up to speed with the driver replacements later this week to verify that as well.
@Xavier @Avinash Does any of this ring any bells - have you maybe tried using Vulkan on the OPi before? It seems there's also Armbian for the Orange Pi 5 Plus, but they ship it with 6.1 and 6.11, so I guess that wouldn't really bring benefits here.
On 6. 3. 2026. 16:43, milos@whitebox.aero wrote:
Hey, I did receive it on my email, so I assume the ML also got it, but I think it requires approval to be posted on there. I don't have access to that, so when Xavier comes around he'll do it. I've sent him an email. I could technically just reply to your email here, but I think that'd open a new thread.
Also, thank you for the test instructions! I'll run them over the weekend and let you know how it went.
On 6. 3. 2026. 14:51, Lynne wrote:
Was my reply received? I don't see it on the ML.
On 06/03/2026 13:28, Lynne wrote:
Hi all,
Certainly, a Vulkan version would likely speed the pipeline up. GPUs are good at interpolation, and a very large share of the time in the CPU version of the filter is spent doing exactly that.
Would you be able to test whether Vulkan is set up correctly? Try adding `hwupload,avgblur_vulkan=planes=0,hwdownload,format=yuv420p` after the v360 filter. The output should not change, and the slowdown shouldn't be too significant. You'll need to use the current git master of FFmpeg, not any release branch.
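A full invocation might look something like this (just a sketch - input.mp4/output.mp4 are placeholders, and I believe you'll want an explicit Vulkan device via -init_hw_device so hwupload knows where to put the frames):

```shell
ffmpeg -init_hw_device vulkan=vk -filter_hw_device vk \
    -i input.mp4 \
    -vf "v360=dfisheye:e:ih_fov=193:iv_fov=193,hwupload,avgblur_vulkan=planes=0,hwdownload,format=yuv420p" \
    -c:v libx264 output.mp4
```

If device creation fails, the log should show why (missing driver, unsupported extension, etc.).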
I highly suggest using the panfrost Vulkan drivers from Mesa, rather than the outdated, inactively developed vendor-provided drivers. They support the RK3588, but you'll need to look into how to enable them (they require using the mainline kernel panfrost module, rather than mali).
The previous work done by Paul isn't really helpful, as we're currently rewriting all the Vulkan codecs and filters to compile-time SPIR-V, in order to eliminate dependencies on large external libraries and to speed up the code with offline optimizations. I'd be interested in being contracted to write a modern Vulkan filter with enough capabilities for your particular use case, if you agree.
On 06/03/2026 12:37, milos@whitebox.aero wrote:
Hey Lynne,
Adding our conversation from IRC:
> <libmilos-so> Hello Lynne, I'm Milos, contributor to https://whitebox.aero . We're looking for a FFmpeg developer for assistance with filter development, and Paul B Mahol gave us your contact as an expert in Vulkan development. We are building an SBC-based device which needs to run the v360 filter on it, however the CPU filter's performance is starting to be our bottleneck, and we're looking to explore a GPU-based approach to it. Paul started working on `v360_{opencl,vulkan}` filters a while back, but it wasn't finished, and `libavfilter` has changed since, requiring more development to get them working. Here's a link to the mailing list's previous discussion: https://ffmpeg.org/pipermail/ffmpeg-devel/2024-June/330229.html
>
> <libmilos-so> Our use case is to convert dual-fisheye videos to equirectangular in real time, we're basically only doing this for now: `v360=dfisheye:e:ih_fov=193:iv_fov=193`
>
> <libmilos-so> Currently we are running it for a single video feed, but we're close to hitting resource limits. As we'd like to extend to multiple videos, we think that GPU-based acceleration would provide us the performance improvement we need. Considering your FFmpeg expertise, we were wondering if you'd be interested in contracting to help us get this implemented for our use case?
> Our mailing list is at https://ml.whitebox.aero, and if that'd be alright with you, I'd like to copy the contents of this conversation to the mailing list later. If you're interested in the contractual work, it'd be great if we could then continue the conversation on the mailing list to keep the team in the loop
>
> <libmilos-so> Thanks!
>
> <Lynne> sure, I'd be happy to help
>
> <libmilos-so> Thanks! Could you give me your email address and I'll cc you on the thread?
>
> <Lynne> the filter which paul wrote years ago needs quite a bit of work, since currently we're rewriting all vulkan filters to compile-time spir-v generation
>
> <Lynne> sure, [redacted]

For context, we are taking H.264/H.265 dual-fisheye videos from Insta360 cameras, both from live feeds and from file recordings, and processing them with FFmpeg to produce equirectangular videos suitable for playback in browsers. We are currently using the Pannellum player for this, given our current equirectangular format. If there are more efficient options on the backend side to achieve the same effect, we're open to exploring those as well, although frontend clients would also be phones and tablets, so no heavy operations should take place on them.
We are currently developing on the RK3588 (on an Orange Pi 5 Plus), which according to the documentation supports OpenCL 2.2 and Vulkan 1.2. It has H.264 and H.265 hardware encoders and decoders, which we've tried using, but the v360 filter ended up being the biggest bottleneck.
Here's one example of how we're currently using this filter:
return [
    "ffmpeg",
    # Decoder settings
    "-fflags", "+genpts+discardcorrupt",
    "-flags", "low_delay",

    "-f", "h264",
    "-i", "pipe:0",  # input from stdin
    "-r", "30",

    # Filter settings
    "-filter_complex",
    f"[0:v]scale=1440:720,v360=dfisheye:e:ih_fov=193:iv_fov=193[out]",
    "-map", "[out]",

    # Encoder settings
    "-c:v", "libx264",
    "-preset", "ultrafast",
    "-tune", "zerolatency",
    "-pix_fmt", "yuv420p",
    "-profile:v", "main",
    "-level", "3.1",

    # Bitrate settings
    # This ensures that keyframes are pushed frequently enough to keep the
    # streaming latency low
    "-g", "30",            # keyframe interval
    "-keyint_min", "30",   # minimum keyframe interval
    "-sc_threshold", "0",  # disable scene change detection
    "-muxpreload", "0",
    "-muxdelay", "0",

    "-f", "flv",  # output format
    "-rtmp_live", "live",

    # e.g.: rtmp://srs-dev/live/livestream
    output_target,
]
By my understanding, we likely need a GPU-based v360 filter as a drop-in replacement for the standard CPU-based v360. I think the GPU-based filters would also require `hwupload`/`hwdownload`(?), but I don't have experience with them. Personally I'm new to FFmpeg, so feel free to correct me if something sounds off.
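For illustration, here's how I'd imagine the filter settings in our command above changing (a sketch only: `v360_vulkan` is a hypothetical filter name, and the `-init_hw_device`/`-filter_hw_device` flags are my assumption of what's needed for hwupload to find the device):

```python
def build_filter_args(width=1440, height=720, ih_fov=193, iv_fov=193):
    """Sketch: the "-filter_complex" portion of our current command,
    rewritten around a hypothetical GPU-based v360_vulkan filter."""
    chain = (
        f"[0:v]scale={width}:{height},"
        "hwupload,"  # move decoded frames to GPU memory
        f"v360_vulkan=dfisheye:e:ih_fov={ih_fov}:iv_fov={iv_fov},"  # hypothetical
        "hwdownload,format=yuv420p"  # bring frames back for libx264
        "[out]"
    )
    return [
        "-init_hw_device", "vulkan=vk",  # create a Vulkan device named "vk"
        "-filter_hw_device", "vk",       # make filters use that device
        "-filter_complex", chain,
        "-map", "[out]",
    ]
```

(Whether the scale should stay on the CPU before the upload, or move to the GPU as well, is another open question.)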
The code would ideally be released under a copyleft license, I'll let Xavier pitch in on this.
Does this sound like a proper approach, and something that'd be interesting for you to work on?
Thanks!