Messages - mkrjf

#1
Would the 1D-C support an ML-like hack?
If a new 5DMk4 with video enhancements comes out, would it likely support a hack?
#2
General Help Q&A / 5DMk3 1.3.3 firmware
March 02, 2015, 05:02:56 AM
My 5DMk3 was serviced at Canon Professional Services two weeks ago and several circuit boards were replaced to correct the camera not always powering up when switched on.
They did not provide details, but the work order stated they replaced several circuit boards.
I tried to run the new version of ML targeted at 1_3_3 (thanks Chris!!), but apparently the boot flag that used to be set in my camera was reset (or maybe the circuit board with the EEPROM storing the flag was replaced). The techs did not mention anything about ML, and of course I removed the ML SD card prior to service.
So the current boot-flag setting program will not run, since the camera detects that it was built for a firmware older than 1.3.*.

Am I the only one in this situation? Forum search for 1.3.3 did not return anything.

It has been suggested that if I manually downgrade, flip the bit with the boot-flag program, and then upgrade, the flag state will be preserved and I will be good to go.
Can anyone confirm this? And couldn't the version embedded in the .fir be changed to 1.3.*, even if it has to be done with a hex editor on the binary?
Thanks,
Mike
#3
Is it possible to build the 5DMk3 raw code base on a Mac?
I have asked previously, but the only responses have been PMs from others who have tried and not succeeded.
Thx yet again for ML
Mike
#4
I am installing the ML beta for firmware 1.3.3 after my camera was factory updated by Canon.
Do I need to use a different boot-loader flag-setting file than the one listed here:
http://www.magiclantern.fm/forum/index.php?topic=5520.0
I need to do this whenever the camera firmware changes, right?
Thx
Mike
#5
So my 5D Mk3 started to not turn on once in a while, and I took it into Canon Professional Services. I asked if they could leave the firmware version untouched, and they said that step one in any service is now to upgrade the firmware.
Needless to say, I now have the 1.3.3 firmware installed, and it looks like no ML build will run on that.
Will a new version of ML be available for 1.3.3 anytime soon?
Otherwise I can't use the camera for digital film, which is the main reason I still have it :(
#6
Very nice - Gold Star.
Guess: some very large light over the water creating a pool of light for the swimmer to work in?
#7
Share Your Videos / Re: Girls day & night - 5D Mark III
January 25, 2015, 04:40:12 AM
The first clip (fashion show) looks way over-processed - did you do really heavy noise reduction? It looks almost like green screen.
The high chroma works okay on the third-to-last clip, but on the first, combined with (I guess) heavy smoothing, it looks jarring (to me).
#8
like
#9
Share Your Videos / Re: Moscow. September
January 25, 2015, 04:08:48 AM
Nice.
Sure, we could all use more professional equipment ;)
I was noticing more the dark tone of the day scenes - was it intentional to keep a darker tone throughout?
Anyway, show us more of Moscow ;)
#10
What lens and settings did you use (aperture, ISO, shutter speed)? Color grading software? What color space did you output to? Do you have a calibrated monitor? If Rec.709, there would be information loss (likely darker).
Too dark is relative. I suspect performance should be better than the GH4's, just on sensor size and raw (unless the GH4 had an external recorder).
Anyway, thanks for sharing - nighttime outdoors with video projection seems like a difficult subject regardless.
I have used an f/1.5 Rokinon for night shoots and can get a lot of information out of the blacks. But for my f/4 lens it would be unpleasantly noisy.
I am studying Japanese and should have known the Disney characters would be speaking Japanese, but it was still a pleasant surprise ;)

I suggest trying to photograph a person next to a strong candlelight as a test. Too dark is not usually the issue, but rather too much noise in the darks.
ありがとうございました (Thank you very much)
Mike
#11
I am glad to help test / support the 5DMk3 firmware 113 build.
I am using the older version of ML, currently outputting a .raw file.
I switched from the raw2dng app to MLRawViewer.
Thanks very, very much for reading our minds.
So nice to preview before grinding out the transcode.
If you can provide a link / process for building on OS X (Mavericks, Apple dev environment), I am glad to build and give feedback.
One thing I noticed, comparing the same footage converted with raw2dng and with MLRawViewer (sRGB output from MLRawViewer, sRGB or AdobeRGB - I forget - selected in camera, and the neutral camera color style): the MLRawViewer-converted skin tone seems ever so slightly greener. No one would notice, except I was editing and had both clips loaded for a cut between them.
I'm not sure the color science for raw2dng was documented - and there was no pick list - but maybe you could share your thoughts. raw2dng did have a pink problem at one point, so maybe that clip was pinker, as opposed to MLRawViewer being greener. No idea.
Also, can you support AdobeRGB? I usually select that in camera, and for photography it definitely covers a larger color space. I would like to do no color transformations before post - keep it all AdobeRGB if possible. (I post photos to AdobeRGB and film to P3 with a calibrated monitor.) It sounds like you have access to LUTs for the conversions - home-grown? Just asking, as most post houses don't share that info (secret sauce).
Lastly, can you confirm the bits of significant data in and out for MLRawViewer? In another post you mention ProRes 4:4:4 is 10 bits deep, so are you doing simple truncation of the low-order bits?
When outputting to DNG, are you writing 14 bits of data into 16-bit words?
Have you found that the ML raw output actually uses 14 bits? Any idea whether the older version (right before the HDR version and new file format) and the newest version both provide a full 14 bits of data?
Curious to quantify what is actually the input to color grading when using ML and MLRawViewer with both ProRes4:4:4 and DNG flows.
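To make the bit-depth question concrete, here is a rough sketch of what I mean (pure bit arithmetic on made-up samples - not taken from MLRawViewer or raw2dng code):

```python
# Sketch: what 14-bit raw samples look like after truncation to 10 bits
# (dropping the 4 low-order bits) versus storage as-is in 16-bit words for DNG.
import numpy as np

samples_14bit = np.arange(0, 2**14, dtype=np.uint16)   # every possible 14-bit code value once

truncated_10bit = samples_14bit >> 4                    # simple truncation: 16384 levels -> 1024
stored_16bit    = samples_14bit.copy()                  # 14-bit values in 16-bit words: nothing lost

print("levels after 10-bit truncation:", len(np.unique(truncated_10bit)))   # 1024
print("levels kept in 16-bit words:   ", len(np.unique(stored_16bit)))      # 16384
```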
Mike
#12
Shot an indie short on the 5DMk3 with the ML hack last weekend - yay!
But I can only convert some of the footage to ProRes 444 .mov using raw2dngapp and raw2dng on Mac OS X (Mavericks).
Sometimes the captured footage is one large raw file (> 4 GB); other times it is a set of files (001, 002).
The cat method of just concatenating the files back together seems to work (roughly as sketched below).
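For reference, this is roughly what I mean by the cat method (a sketch only; the chunk file names are hypothetical, and the order in the list has to match the capture order):

```python
# Join the split raw chunks back into one file, equivalent to
# `cat M13-1450.RAW M13-1450.R00 M13-1450.R01 > M13-1450_joined.raw`.
import shutil

chunks = ["M13-1450.RAW", "M13-1450.R00", "M13-1450.R01"]   # hypothetical names, in capture order
with open("M13-1450_joined.raw", "wb") as out:
    for chunk in chunks:
        with open(chunk, "rb") as f:
            shutil.copyfileobj(f, out)
```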

For some of the original single-file RAW files, raw2dngapp creates the DNG files but no ProRes .mov (see below).
The whole reason for me to use it is to get the .mov for editing. I would rather not have to edit with .dng.
I suspect the 'error' messages are spurious, as all the files have identical frame rates, and some process completely while others don't.

BTW, can anyone confirm the output is true ProRes 4444? Some applications report the output as ProRes 4444 but others show it as ProRes 422 (who knows which program is correct) - I checked with QuickTime Player, VLC, and MPEG Streamclip.
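One way to settle which reading is right (a sketch, assuming an ffprobe binary from a reasonably recent ffmpeg is on the PATH; the file name is just an example):

```python
# Ask ffprobe which codec, profile and pixel format the video stream actually carries.
# ProRes 4444 should show a 4444 profile and a 4:4:4 pixel format (e.g. yuva444p10le),
# while ProRes 422 HQ shows an HQ profile and yuv422p10le.
import subprocess

def video_stream_info(path):
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,profile,pix_fmt",
         "-of", "default=noprint_wrappers=1", path],
        capture_output=True, text=True, check=True)
    return result.stdout

print(video_stream_info("M13-1450.mov"))   # hypothetical file name
```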

Can someone confirm what ffmpeg command and what version of ffmpeg are used, so I can repeat it manually?
Only the ffmpeg output is shown, not the command being executed (maybe that could be echoed in the next version?).
Sometimes I get a cryptic error: maxvalue=0.
Also, raw files smaller than a certain size always fail (not the issue here, just FYI).

raw2dng converter GUI for OsX
Beta ver.0.13

I wish raw2dng showed its version! It just shows usage.

So, for example, a 12.15 GB single raw file was captured for one take, and raw2dngapp successfully converted it and created a 3.24 GB ProRes 444 .mov - fabulous.

But for a 5.58 GB single raw file it created 1536 DNGs (not sure whether all of them were created successfully) and then failed to create the .mov - see the error below (though sometimes there is an error/warning of maxvalue=0).

raw2dng converter GUI for OsX
Beta ver.0.13

M13-1450 File Supported
Generating ProResHQ 4444 with FPS: 0.000
ffmpeg version 1.2.1-tessus Copyright (c) 2000-2013 the FFmpeg developers
  built on May  9 2013 21:58:14 with llvm-gcc 4.2.1 (LLVM build 2336.1.00)
  configuration: --prefix=/Users/tessus/data/ext/ffmpeg/sw --as=yasm --extra-version=tessus --disable-shared --enable-static --disable-ffplay --enable-gpl --enable-pthreads --enable-postproc --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-libspeex --enable-bzlib --enable-zlib --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libxavs --enable-version3 --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvpx --enable-libgsm --enable-libopus --enable-fontconfig --enable-libfreetype --enable-libass --enable-filters --enable-runtime-cpudetect
  libavutil      52. 18.100 / 52. 18.100
  libavcodec     54. 92.100 / 54. 92.100
  libavformat    54. 63.104 / 54. 63.104
  libavdevice    54.  3.103 / 54.  3.103
  libavfilter     3. 42.103 /  3. 42.103
  libswscale      2.  2.100 /  2.  2.100
  libswresample   0. 17.102 /  0. 17.102
  libpostproc    52.  2.100 / 52.  2.100
[image2pipe demuxer @ 0x10201f400] Could not parse framerate: 0.000.
pipe:0: Invalid argument
Done

If there are newer versions, please point me to them. I know I'll spark the 'didn't you google for t^%&^% to get the already-posted answer?' response, but sometimes what to google for is the issue ;)
Thanks in advance
Eager to brag about the value provided by ML for indie filmmakers!!
Mike
#13
The following has before and after shots of 5DMk3 raw.
I used the older version of ML that outputs raw and then converted with the new raw2dng that supports ProRes 4444.
When filming I made an effort not to overexpose, but the ProRes 4444 initially appeared overexposed - I guess that is just how raw2dng converts.
Is it intended to output linear? I was expecting log - maybe we could have an option for that?
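To illustrate what I mean by a log option (a sketch of a generic log curve applied to linear values - not any particular camera or tool's transfer function):

```python
# A generic log curve on normalized linear raw values: highlights get compressed and
# shadows get more of the output range, which is why log footage looks "flat" rather
# than overexposed. Purely illustrative, not a real camera log spec.
import numpy as np

linear = np.linspace(0.0, 1.0, 2**14)                          # normalized linear 14-bit ramp
log_encoded = np.log2(1.0 + 255.0 * linear) / np.log2(256.0)   # simple log mapping into [0, 1]

# The brightest stop of linear data (0.5 .. 1.0) collapses into a small slice of the log output:
top_stop = log_encoded[-1] - log_encoded[len(log_encoded) // 2]
print(f"the top stop occupies about {top_stop:.2f} of the log-encoded range")
```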

I just imported the ProRes 4444 into SpeedGrade and quickly adjusted exposure, saturation, and black levels to eliminate hot spots on the white costumes.
So there were at least 10 bits of information captured!
I used the linear and 5DMk3 profile settings in SpeedGrade.
Curious to know if others have SpeedGrade settings recommendations or custom profiles.
Will also try with DaVinci.

https://plus.google.com/photos/109692674038873146393/albums
#14
So, assuming all the readers of this forum know how to use google and can read...
Just trying to fit in with the 'tone' of this thread ;)

http://documentation.apple.com/en/finalcutpro/professionalformatsandworkflows/index.html#chapter=10%26section=2%26tasks=true
"Apple ProRes 4444
The Apple ProRes 4444 codec offers the utmost possible quality for 4:4:4 sources and for workflows involving alpha channels. It includes the following features:

Full-resolution, mastering-quality 4:4:4:4 RGBA color (an online-quality codec for editing and finishing 4:4:4 material, such as that originating from Sony HDCAM SR or digital cinema cameras such as RED ONE, Thomson Viper FilmStream, and Panavision Genesis cameras). The R, G, and B channels are lightly compressed, with an emphasis on being perceptually indistinguishable from the original material.

Lossless alpha channel with real-time playback

High-quality solution for storing and exchanging motion graphics and composites

For 4:4:4 sources, a data rate that is roughly 50 percent higher than the data rate of Apple ProRes 422 (HQ)

Direct encoding of, and decoding to, RGB pixel formats

Support for any resolution, including SD, HD, 2K, 4K, and other resolutions

A Gamma Correction setting in the codec's advanced compression settings pane, which allows you to disable the 1.8 to 2.2 gamma adjustment that can occur if RGB material at 2.2 gamma is misinterpreted as 1.8. This setting is also available with the Apple ProRes 422 codec."

So let's rephrase the question:
given the sensor data that is present in an ML 5DMk3 raw video output file:
1) What does the current code (rawmagic or raw2dng followed by FFmpeg or DaVinci or whatever) do to recombine the collection of RGGB picture elements into a 'correctly' balanced (across channels) RGB (or YUV, or whatever color space is used)?
2) When all is said and done, how much of the original information actually made it into the output file? There is no way it is 100% or unshifted in color.
3) Where in the sensor did the data come from, and is it really correlated at the pixel level with the light field presented to the sensor? Zoom mode 1:1 vs. the normal subsampled sensor, for example.

And how do we really know that is true? Forget what you see on your 10-bit Rec.709 HP DreamColor or 12-bit P3-calibrated Barco -
has anyone written a program that feeds test data into the process, a ramp or similar covering the entire dynamic range the sensor could generate, and then checked what is left after the post-processing steps?

You can even just do it for luminance, with no color, and verify what happens to the green channels. Take a video of solid green and see what you get. But some science, please.
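Something like this is all I am asking for (a sketch of the idea only: generate a full-range 14-bit ramp and count how many distinct levels survive; the 'conversion' here is a stand-in quantization, and feeding the ramp through the real raw2dng/ffmpeg chain is the part that still needs doing):

```python
# Ramp test sketch: push every 14-bit code value through a conversion and count
# how many distinct output levels survive. Replace quantize() with the real pipeline.
import numpy as np

def quantize(ramp_14bit, out_bits):
    return ramp_14bit >> (14 - out_bits)      # stand-in for the conversion under test

ramp = np.arange(2**14, dtype=np.uint16)      # every 14-bit code value exactly once
for bits in (14, 12, 10, 8):
    levels = len(np.unique(quantize(ramp, bits)))
    print(f"{bits}-bit path keeps {levels} of 16384 levels")
```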

I have done some tests with sunsets, from very dark to very bright - and all I know is that there is noticeably less color depth and 'accuracy' than a RED camera using identical Canon lenses and post-processing (ProRes 4444 and Apple Color or DaVinci or Premiere, or DNG from either raw2dng or rawmagic). Granted, the sensor crop and de-Bayer, etc. are completely different.

So I challenge you - prove your claims by demonstration rather than conjecture.
Thx Mike

#15
It seems the latest version of raw2dng is 0.13 - but it also seems to be older than the latest builds of ML.
Renato mentioned in another forum (or responded to someone) that there are several identically labeled versions of raw2dng.
Really? How would we know which is which?
And better yet, someone posted that they finally found the newest one but provided no link or details.
Please... (Even a checksum would tell the builds apart - see below.)
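If nothing else, hashing the binary would distinguish identically labeled builds (a sketch; the path is just an example):

```python
# Print a SHA-1 of the raw2dng binary so different builds with the same label can be told apart.
import hashlib

def sha1_of(path, block_size=1 << 20):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            h.update(block)
    return h.hexdigest()

print(sha1_of("raw2dng"))   # hypothetical path to the downloaded binary
```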
I am also finding that it seems to be outputting 422, not 444. I asked once or twice if others could confirm, and I only see a few others claiming the output is 422.
If you know where the answers are in the forum please post or private message me.
Thx
Mike
#16
I did post-process.
I included before and after, as well as raw video of the same scene with no dual ISO.
There are probably too many images to look at - I will simplify and repost.
But basically the JPEG from the camera shows no banding, while ML raw and dual ISO both show banding. Dual ISO does not show much improvement in dynamic range over raw video (no dual ISO).
But like I said, I certainly could have done something wrong. Are there no known problems with the latest version of ML or the 'process' described for dual ISO on Mac?
Thx
#17
I have used the 'process' referenced here and elsewhere for Dual ISO on the latest 5DMk3 build, and compared the results to plain ML raw and also a camera JPEG.

If you look at some sample output from a low light test (my kitchen with a few practicals):
https://plus.google.com/photos/109692674038873146393/albums/5924621701487097649
Sorry, I have not labeled all the photos, but it should be obvious from the file names. Anyway, I see lots of banding in both raw and dual ISO raw - much more than in a picture taken with the camera. I tried to pick an optimal exposure range that would not clip the white toaster.
Looking at the pics in Google Plus the banding is not so obvious, but looking at the original output at HD resolution it reminds me of old 256-color graphics cards :(
Is this the expected result? It would not be usable, I think. Also, does Dual ISO only provide a big improvement for non-optimal exposure?
For the sunset test with the June build I only noticed banding in the sky if I pushed too hard in grading - not right out of the process (raw*/cr2hdr).

I have not tried raw2cdng as suggested by redder yet. Does that support CDR?
Thx
Mike
#18
This is all great stuff (see, I don't always whine ;) ).

Since some of the motivation for the raw container is performance, are there any back-of-the-envelope numbers on performance / resolutions for 5DMk3 raw? Also, while this is wandering into dual ISO territory - will the actual bit depth of the data be the same?
I searched the forum - but sometimes you need to know what to search for first ;)

And as someone hinted - will there be raw container support for sync sound?
Thx
#19
Raw Video / Re: Raw goals for cinematography
September 13, 2013, 06:44:02 PM
Actually, RenatoPhoto suggested moving to General Chat but did not mention there was an existing thread on the same topic - so I will follow up there and not here.
BUT - reddeercity - I am really interested in the 2K ML footage shown in theatres ;) Could you provide some links to a trailer or some overview of the post / finishing you used? That is my target, not still photos (the factory config is fine for still photos, IMHO).
Thx.
#20
Raw Video / Raw goals for cinematography
September 10, 2013, 09:53:25 PM
It seems like the direction of Magic Lantern raw is headed off track.
Is the goal cinematography or programming adventures?
If the former, then a stable 24 fps HD raw with a workflow that does not reduce quality should be the primary objective. If you expose correctly, 14 bits of dynamic range is enough - the HDR and ISO bit tweaking needs to be motivated by quality issues, and HDR requires a high frame rate as well.
Likewise for changes in the raw format / in-camera processing (demosaic).

My personal experience using the June build on the 5DMk3 is that there is more dynamic range seen by the camera than makes it through the workflow. Using member-provided utilities (thx!) to get DNG and .mov in one step, it sure seems like much less than 14 bits of data is making it to the DNG, and then the .mov is only ProRes 422. Maybe a test raw 'ramp' can be created to verify?
Is it not possible to carry 16 bits (14 of data) to DNG and then full 4444 to ProRes?
422 is better than the in-camera H.264, but not good enough to use for indie feature work.

I have asked a couple of times (no real answers) about the actual processing of raw and the actual bits of information in raw.
Based on the ISO discussion it seems that there may be a few bits of dither 'noise' in the data. Can someone summarize that in one or two sentences?

I did a sunset test with the June build, with the sun just overexposed, and I was able to do a light color grade and get very nicely crushed blacks and very natural-looking highlights and midtones. I did not see a lot of noise in the blacks - but then I exposed properly and crushed the blacks a little (which will blend away / eliminate minor noise in the blacks).
So it seems the June build was close to usable - it just needed some cleanup / documentation.
So what exactly is the goal of this development group?

Also, I believe the frame rate is off, and there appear to be some dropped-frame effects even though the recording is continuous.
I will do a sync-sound recording (Zoom or Tascam) with a head slate and a tail slate to confirm (audio duration versus video duration between the head and tail slates), roughly as sketched below.
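Roughly this calculation, for anyone who wants to repeat it (all the slate numbers below are hypothetical placeholders; the audio recorder's clock is taken as the reference):

```python
# Estimate the effective capture frame rate from a head slate and a tail slate:
# effective_fps = frames between slate claps / seconds between slate claps (audio clock).
audio_head_slate_s = 3.20        # clap times read off the audio recorder (placeholders)
audio_tail_slate_s = 127.45
video_head_slate_frame = 77      # frame numbers of the claps in the video (placeholders)
video_tail_slate_frame = 3056
nominal_fps = 23.976

frames = video_tail_slate_frame - video_head_slate_frame
seconds = audio_tail_slate_s - audio_head_slate_s
effective_fps = frames / seconds
drift_s = frames / nominal_fps - seconds     # how far video time drifts from audio over the take

print(f"effective fps = {effective_fps:.3f}, drift over the take = {drift_s:+.3f} s")
```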

I think I would like a branch of the code that is purely targeted at indie cinematography.
Thx
Mike
#21
Does anyone have a complete documented process for building on Mac?
A few people I talked to offline said they could not get it to work.

Also - a suggestion: can someone share a virtual machine image that has successfully compiled a recent nightly? That would give immediate, zero-pain access!
I mean, assuming people want to share their efforts.
Mike
#22
Feature Requests / Re: Genlock
June 30, 2013, 10:17:29 PM
Never going to happen.
You might be able to implement an audio track carrying a genlock signal and feed the headphone output to the other device...
#23
Feature Requests / one menu setting for cinema mode
June 30, 2013, 10:15:49 PM
Take all the miscellaneous settings that contribute to the best cinematography and make them a one-click menu item.
So, say, 1920x1080 center cut (true center) at 23.976 fps with audio (if possible - even a low bitrate, just for a sync track).
Histogram displayed until REC is pressed.
#24
Very nice feature - it could even be used instead of the histogram redraw.
It does not need complex math - just the percentage of pixels that are all white or all black.
Set a threshold at some %.
Open up the aperture until the bar lights up and then back off - it could be very intuitive. Something like the sketch below.
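A sketch of the whole check (the threshold and frame here are placeholders; real ML code would read the live-view buffer rather than a NumPy array):

```python
# Fraction of pixels in a frame that are clipped to full black or full white.
import numpy as np

def clipped_fraction(frame, black_level=0, white_level=255):
    # frame: 2D array of 8-bit luma values (stand-in for the live-view buffer)
    clipped = np.count_nonzero((frame <= black_level) | (frame >= white_level))
    return clipped / frame.size

frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)   # dummy frame
threshold = 0.02            # light the warning bar above 2% clipped pixels (placeholder)
print("warn" if clipped_fraction(frame) > threshold else "ok")
```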