Share your experience!
Depth map generation, for better autofocus and for VFX/compositing purposes. This could build on optical flow, structure from motion (SfM) and multi-view stereo (MVS) to continuously reconstruct the environment in 3D for better AF (or even to export 3D models), and/or on LiDAR and ToF sensor tech, and/or on AI / neural-engine monocular depth estimation (approaches like MiDaS, the "Consistent Video Depth Estimation" paper, or LeReS, "Learning to Recover 3D Scene Shape From a Single Image"). The resulting data would be recorded as a depth map (a greyscale "height map" / depth video), ideally together with a velocity map (optical flow data) and gyro / angular velocity sensor data, so that motion information is available for better post-production stabilization. A rough sketch of the depth-map idea is shown right below.
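Just to illustrate the monocular depth estimation part, here is a minimal offline sketch (not how an in-camera pipeline would actually work) using the publicly available MiDaS small model from PyTorch Hub plus OpenCV; the file names are placeholders I made up.

```python
# Minimal offline sketch: monocular depth estimation (MiDaS small) on a single
# frame, written out as a 16-bit greyscale depth map. "frame.png" and
# "frame_depth.png" are placeholder file names.
import cv2
import numpy as np
import torch

# Load the small MiDaS model and its matching input transform from PyTorch Hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# Read a frame (OpenCV gives BGR) and convert to RGB for the model.
img_bgr = cv2.imread("frame.png")
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img_rgb))
    # Resize the prediction back to the original frame resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img_rgb.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

depth = prediction.cpu().numpy()

# MiDaS outputs relative (inverse) depth; normalize to 0..65535 and save a
# 16-bit greyscale PNG -- one frame of the "depth map video" described above.
depth_norm = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
cv2.imwrite("frame_depth.png", (depth_norm * 65535).astype(np.uint16))
```

A 16-bit greyscale container like this would already give compositing software enough precision headroom for masking and depth-of-field work.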
These are things that would be interesting to see in Alpha cameras for stills and professional video production, mainly for post-processing / VFX tasks. Light field capture and global shutter would also be needed; especially in high-resolution cameras (8K or 12K) a global shutter is necessary, and the frame rate should always be at least 60 fps for better and more accurate motion capture for VFX / compositing. A higher frame rate is always better, and 120 fps is already pretty good for most motion capture tasks, especially when combined with depth-aware frame interpolation, which is another reason the depth map mentioned above would be useful (it could also improve autofocus, which is Sony's weakest point). A rough sketch of flow-based interpolation follows below. Sure, there are many other areas, like 5-axis IBIS + OIS sync and other tech, that would need improving if Sony wants to be the best in every video aspect.
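To show why per-pixel motion (and ultimately depth) data helps frame interpolation, here is a toy sketch using OpenCV's Farneback dense optical flow to synthesize a rough intermediate frame between two placeholder files; a real depth-aware interpolator would additionally use the depth map to resolve occlusions, which this naive warp ignores.

```python
# Toy sketch: dense optical flow (Farneback) between two consecutive frames,
# kept as a "velocity map", plus a crude intermediate frame made by warping
# the second frame partway back along the flow. File names are placeholders.
import cv2
import numpy as np

frame0 = cv2.imread("frame_000.png")
frame1 = cv2.imread("frame_001.png")
gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

# Dense per-pixel motion vectors from frame0 to frame1 (the "velocity map").
flow = cv2.calcOpticalFlowFarneback(
    gray0, gray1, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)

h, w = gray0.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

# Crude interpolation: sample frame1 halfway along each flow vector.
# This ignores occlusions -- exactly the case where a per-pixel depth map
# (depth-aware interpolation) would decide which surface is in front.
t = 0.5
map_x = (grid_x + t * flow[..., 0]).astype(np.float32)
map_y = (grid_y + t * flow[..., 1]).astype(np.float32)
mid_frame = cv2.remap(frame1, map_x, map_y, interpolation=cv2.INTER_LINEAR)

cv2.imwrite("frame_000_5.png", mid_frame)
# Keep the raw flow as 32-bit floats so compositing tools can reuse it.
np.save("frame_000_flow.npy", flow.astype(np.float32))
```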
Hey jmukari,
Thank you for your suggestions; we've passed them on to the proper channels.
Sony Support.