Messages - Fares Mehanna

#1
General Development / Re: Apertus MLV Format Discussion
September 13, 2019, 04:23:23 PM
Thank you Ilia for opening the thread.

Sorry for my late reply; I was revising my knowledge about MLV.

I spent the last 3 months working in GSoC, implementing LJ92 on the FPGA for the Axiom Beta and Micro, and I gained a good knowledge of MLV during and before GSoC.

Luther is correct: Axiom can build its own hardware, so the possibilities are endless, but our main priority is still to be as friendly as possible on the receiver end, since any heavy processing could limit the FPS.

Side info: currently, Axiom cameras do not store footage internally, so RAW data will be transferred to and stored in an external recorder. A computation-friendly format will allow the recorder to use less power and be more compact while handling higher FPS.

Anyway, I will start by pointing out a few things Axiom currently needs.

1. How the Bayer pattern is represented in the frame. For example, if the sensor Bayer pattern is RGGB, the data can be represented in the frame as
R G R G R G R G ...
G B G B G B G B ...
or as
R G G B R G G B ...
R G G B R G G B ...
Currently, Axiom frames use the second style, which is not supported in MLV.
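To make the two layouts concrete, here is a small C sketch (my own illustration, not actual camera or MLV code) that converts the second style to the first. It assumes the second style stores each 2x2 RGGB cell as four consecutive samples; if the real packing differs, only the indexing of `q` would change.

```c
#include <stdint.h>
#include <stddef.h>

/* Convert an RGGB frame from a "packed quads" layout, where each 2x2
 * Bayer cell is stored as 4 consecutive samples (R G G B R G G B ...),
 * to the conventional line-interleaved layout:
 *   R G R G ...
 *   G B G B ...
 * width and height are in pixels and must be even.
 * NOTE: the packed-quad interpretation is an assumption for this sketch. */
static void quads_to_interleaved(const uint16_t *src, uint16_t *dst,
                                 int width, int height)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            /* each quad occupies 4 consecutive samples in src */
            const uint16_t *q = src + (size_t)(y / 2) * (width / 2) * 4
                                    + (size_t)(x / 2) * 4;
            dst[(size_t)y * width + x]           = q[0]; /* R  */
            dst[(size_t)y * width + x + 1]       = q[1]; /* G1 */
            dst[(size_t)(y + 1) * width + x]     = q[2]; /* G2 */
            dst[(size_t)(y + 1) * width + x + 1] = q[3]; /* B  */
        }
    }
}
```

A receiver would run this once per frame before handing the buffer to an MLV-style consumer; the reverse mapping is the same loop with `src` and `dst` swapped.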

2. The HDR modes in the Axiom Beta include three modes:
- Even and odd lines with different exposure times (not different ISOs, as Magic Lantern does).
- PLR mode, which allows non-linear RAW data.
- In the future, Axiom could utilize its high FPS and global shutter to merge several frames with different gains or exposure times.

The current "mlv_diso_hdr_t" expects dual ISO only, but we need more information for all three modes.
You can check Supragya Raj's work to understand more about PLR.

3. Log-encoded pixels instead of linear values. This is not currently used in the RAW pipeline, but it might be in the future; check the work done in this area in the Log Curve Simulation Paper. It might be implemented as alpha, beta and gamma parameters, or as a LUT.

I believe that is what is needed to store the RAW data out of the camera for now.
But in the future I think we might need many more ways to describe the data in the frame, for example storing each channel separately, or slicing the frame horizontally or vertically into several slices. Those tricks can help on the FPGA side.
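For example, separating the channels could be sketched as below (my own illustration, assuming an RGGB line-interleaved frame with even dimensions):

```c
#include <stdint.h>
#include <stddef.h>

/* Split an RGGB line-interleaved frame into four channel planes
 * (R, G1, G2, B), each (width/2) x (height/2) samples.
 * Separate planes can simplify an FPGA pipeline and tend to compress
 * better, since each plane is smoother than the interleaved mosaic. */
static void split_rggb_planes(const uint16_t *frame, int width, int height,
                              uint16_t *r, uint16_t *g1,
                              uint16_t *g2, uint16_t *b)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            size_t p = (size_t)(y / 2) * (width / 2) + (x / 2);
            r[p]  = frame[(size_t)y * width + x];           /* R  */
            g1[p] = frame[(size_t)y * width + x + 1];       /* G1 */
            g2[p] = frame[(size_t)(y + 1) * width + x];     /* G2 */
            b[p]  = frame[(size_t)(y + 1) * width + x + 1]; /* B  */
        }
    }
}
```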

That is what is on my mind right now. I will keep researching other areas we might need, and I will also post this thread on the Apertus IRC so interested developers can contribute.

Quote from: Luther on September 12, 2019, 04:34:50 AM
using Zstandard instead of LJ92 could possibly offer smaller sizes, but would require FPGA programming

Interesting point. I was working on LJ92 since it is supported in both MLV and DNG, and I did not really know about Zstandard; the concept of asymmetric numeral systems is interesting. Maybe someone, or I, will implement it in the future (: