Squeezing the last bit of performance out of MLV Lite (for testers)

Started by a1ex, April 10, 2016, 11:36:35 AM


Frank7D

I just made a spreadsheet (copied it off a helpful website) that compares two data sets and tells us if their means are statistically significantly different (with, say, 95% confidence).

It requires every value measured to be entered, so for instance the number of frames in each clip.

I'll try it on any test results someone wants (I can make it available too, if it proves useful).

Danne

Great stuff. I couldn't get the script to work on my Mac, A1ex. I didn't work on it very hard and I'm squeezing in these tests between other things. For what it's worth, I started building a small script to do some summary calculations (max, min, average) over ten files. From my pragmatic testing I'd say there is maybe a slight edge to the A1ex build. A lot of the clips went to 34 seconds, whereas the dmilligan build stopped at 33 seconds. Here are the results and the script.

First run(A1ex mlv_lite)
Canon EOS 5D Mark III

Frames(each file)
733
1012
906
981
991
975
993
692
966
973

Max
1012

Min
733

Mean
922.2


Other metadata
Fps
29.97

Resolution
1920x1080


Second run (dmilligan mlv_lite)
ERR:1 md:0x 0 ml:0

Frames(each file)
973
965
957
961
984
970
972
963
976
984

Max
984

Min
957

Mean
970.5


Other metadata
Fps
29.97

Resolution
1920x1080



Third run(A1ex mlv_lite)
Canon EOS 5D Mark III

Frames(each file)
1029
1020
1015
1029
1014
1017
1007
1035
997
999

Max
1035

Min
997

Mean
1016.2


Other metadata
Fps
29.97

Resolution
1920x1080



Fourth run(dmilligan mlv_lite)
ERR:1 md:0x 0 ml:0

Frames(each file)
787
961
961
966
966
1006
1003
1010
1003
1003

Max
1010

Min
787

Mean
966.6


Other metadata
Fps
29.97

Resolution
1920x1080



The script. Save it as a .command file and double-click it next to 10 MLV files.

#!/bin/bash
# Summarize 10 MLV clips into mlv_lite.txt:
# camera name, frame count per file, max/min/mean, fps, resolution
workingDir=`dirname "$0"`
cd "${workingDir}"

# refuse to run unless at least 10 MLV files are present
lock=$(ls *.MLV | awk 'FNR == 10 {print; exit}')

if [ -z "$lock" ]
then
clear
echo "
No cheating. 10 MLV files required"
sleep 2
exit 0
fi

first=$(ls *.MLV | awk 'FNR == 1 {print; exit}')

#Camera name
mlv_dump -m -v "$first" | grep 'Camera Name:' | cut -d "'" -f2 > mlv_lite.txt

#Frames (one count per file; they land on lines 4-13 of mlv_lite.txt)
echo "
Frames(each file)" >> mlv_lite.txt
ls *.MLV | head -n 10 | while read f
do
    mlv_dump -m -v "$f" | grep 'Processed' | awk '{ print $2; exit }' >> mlv_lite.txt
done

#Max/Min (sort -n compares numerically; a plain sort is lexicographic)
echo "
Max" >> mlv_lite.txt
sed -n '4,13p' mlv_lite.txt | sort -n | tail -n 1 >> mlv_lite.txt

echo "
Min" >> mlv_lite.txt
sed -n '4,13p' mlv_lite.txt | sort -n | head -n 1 >> mlv_lite.txt

#Mean of the ten frame counts
echo "
Mean" >> mlv_lite.txt
sed -n '4,13p' mlv_lite.txt | awk '{ sum += $1 } END { print sum / NR }' >> mlv_lite.txt

#Other metadata
#Fps
echo "

Other metadata
Fps" >> mlv_lite.txt
mlv_dump -m -v "$first" | awk '/FPS/ { print $3; exit }' | awk ' sub("\\.*0+$","") ' >> mlv_lite.txt
#Resolution
echo "
Resolution" >> mlv_lite.txt
mlv_dump -m -v "$first" | awk '/Res/ { print $2; exit }' >> mlv_lite.txt

open mlv_lite.txt

osascript -e 'tell application "Terminal" to close first window' & exit

a1ex

Quote from: Frank7D on April 14, 2016, 09:36:58 PM
I just made a spreadsheet (copied it off a helpful website) that compares two data sets and tells us if their means are statistically significantly different (with, say, 95% confidence).

It requires every value measured to be entered, so for instance the number of frames in each clip.

Sounds great.

I'm working on a Lua script that would do the following test (all happens on the camera):

- have a few different versions of raw_rec in a directory, from which it will select a random module
- format the card (simulated keypresses in Canon menu)
- restart the camera (which will also reload the new module)
- record 10 clips
- write down number of frames from each clip to a log file
- repeat the test overnight

So, if all goes well, I'll give you a big data set to analyze. One entry (for 10 clips) would look somewhat like this:

Raw_rec version : a32a5c30
Recorded frames : 282,562,1299,966,768,691,796,528,842,588
Quartile summary: 729.5 frames (562...842)


In a nutshell, the script would try different versions (all uploaded on card in advance), record 10 test clips, write down the number of frames, format the card, repeat.
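The "Quartile summary" line from such an entry can be computed in a few lines. Here's a sketch (judging from the example numbers, the quartiles shown appear to be the 3rd and 8th of the ten sorted values; that mapping is my assumption, and the helper name is mine):

```python
import statistics

def quartile_summary(frames):
    """Median plus lower/upper quartile of the ten recorded frame counts."""
    s = sorted(frames)
    # for 10 values: median of the two middle ones; quartiles taken as the
    # 3rd and 8th sorted values (this matches the example entry above)
    return statistics.median(s), s[2], s[7]

frames = [282, 562, 1299, 966, 768, 691, 796, 528, 842, 588]
median, q1, q3 = quartile_summary(frames)
print(f"Quartile summary: {median} frames ({q1}...{q3})")
# → Quartile summary: 729.5 frames (562...842)
```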

Frank7D

I just put Danne's results through the spreadsheet. I combined the two sets of a1ex numbers into one and did the same with the dmilligan numbers.

The means for the two (consolidated) sets are amazingly close: a1ex 969.2, dmilligan 968.55.

Critical T Value (@ 95% confidence): 1.69
Our T-Stat: 0.03

When the T-Stat is less than the Critical T Value (as in this case), the difference is not significant (as we could guess from looking at the means).
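For anyone without the spreadsheet, the comparison above can be reproduced with a short pooled two-sample t-test. This is a sketch assuming equal variances (what a standard spreadsheet two-sample t-test uses); the function name is mine:

```python
import math
import statistics

def pooled_t(x, y):
    """Two-sample t statistic, assuming equal variances (pooled)."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)   # sample variances
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)     # pooled variance
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1/nx + 1/ny))

# Danne's combined frame counts: a1ex builds (runs 1+3) vs dmilligan builds (runs 2+4)
a1ex_frames = [733, 1012, 906, 981, 991, 975, 993, 692, 966, 973,
               1029, 1020, 1015, 1029, 1014, 1017, 1007, 1035, 997, 999]
dmilligan_frames = [973, 965, 957, 961, 984, 970, 972, 963, 976, 984,
                    787, 961, 961, 966, 966, 1006, 1003, 1010, 1003, 1003]

print(statistics.mean(a1ex_frames))             # 969.2
print(statistics.mean(dmilligan_frames))        # 968.55
print(pooled_t(a1ex_frames, dmilligan_frames))  # ~0.03, well below the critical T of 1.69
```

With 20 + 20 samples there are 38 degrees of freedom, for which a one-sided 95% critical value of roughly 1.69 matches the spreadsheet's figure.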

a1ex, I'm ready for that data whenever it's done.

a1ex

Here's the script I'm currently running: rawbench.lua.

It's not yet compatible with the nightly - only the latest lua_fix branch can handle it at the time of writing. It was a pretty strong test for the scripting API: I thought it was almost ready to merge, but this little script revealed a lot of bugs on the file I/O side.

Be careful with this script (it's destructive, it formats your card without asking).

reddeercity

Quote from: Danne on April 14, 2016, 10:42:11 AM
@reddeercity
Did you compare your results? I don't understand how to determine which build is the faster one. Am I missing something?
I did the test with the dmilligan mlv_lite build in the MLV Lite thread, so I didn't post anything here. Plus I didn't have time the other day.

Quote: Ok, back on topic. I did a more comprehensive test with the MLV Lite module on my 5D2 with the updated raw_rec.mo.
I used the latest Nightly Build, compared MLV Lite to full MLV (using 2 different nightlies for full MLV, Feb15/2014 & Feb13/2016), and used the old RAW format from Oct24/2013.
All tests done on CF Lexar 32GB 1066x.
MLV Lite: magiclantern-Nightly.2016Feb13.5D2212, dmilligan mlv_lite 2nd build
1856x1004 @ 23.976 with GD (global draw)  enabled  - 1134 Frames
1856x1004 @ 23.976 with GD disabled - continuous until full (74.5MB/s write speed)
1856x1004 @ 23.976 with GD disabled + HDMI enabled (Ninja HDMI Hard Drive recorder connected) - 3215 Frames
1856x1044 @ 23.976 With GD enabled - 1089 Frames
1856x1044 @ 23.976 With GD disabled - 1662 Frames

The results of the a1ex mlv_lite test:

3x Crop mode:
1920x1038 @ 23.976 with GD enabled - 1046 Frames (76.9 MB/s Write Speed)
1920x1038 @ 23.976 With GD disabled - 2354 Frames
1920x1076 @ 23.976 With GD enabled - 724 Frames
2048x930 @ 23.976   With GD disabled -  continuous until full (74.8 MB/s Write speed)
2048x1024 @23.976  With GD disabled - 1016 Frames

All said and done, the a1ex mlv_lite build produces 700 to 800 more frames in 1:1 and about 300 more frames in 3x crop.

Now I can only speak for the 5D2. As you say, you are looking for speed. Well, if there were a switch to toggle overlays off & on like full MLV (just the data rate / file info stuff), that would just about equal the speed that the original RAW did back on Oct/24/2013:
Quote: 1856x1004 @ 23.976 + HDMI enabled (Ninja HDMI Hard Drive recorder connected) (74.6MB/s write speed) - continuous until full
1856x1044 @ 23.976  (77.5MB/s Write speed) -  continuous until full
1872x1012 @ 23.976 (76 MB/s write Speed)   -  continuous until full
1920x1038 @ 23.976 (79.7 MB/s write speed) - continuous until full
This is what I compare all Nightly Builds to; it's the gold standard for the 5D2.
So I look for the largest continuous resolution the camera can do; that's why I still use 1872x936 full MLV.
My 2 cents


a1ex

Looks like I forgot to initialize the random seed, so the script picked the same build every time.

@Frank7D: how significant is this difference?


Recorded frames : 379,1260,1307,1306,1061,892,572,1022,568,375
Quartile summary: 957 frames (568...1260)



Recorded frames : 329,900,1010,471,568,518,840,1257,407,471
Quartile summary: 543 frames (471...900)


(you guessed it - best and worst case runs with the same build, test done by the script)

Frank7D

Best case mean: 874
Worst case mean: 677

Critical T: 1.75
T-Stat: 1.29

Since the Critical T is larger, the difference in the means is not significant.

Intuitively, if you look at how much the number of frames varies within each set, it seems like there's a large random factor. You probably would need a larger number of data points to get a meaningful comparison. That doesn't mean each run needs to be larger though. You could just combine several runs of ten.

Danne

I wonder if it would help if I ran, let's say, 40 recordings in a row and gave you the numbers from both mlv_lite versions? Now if A1ex is suddenly serving us another raw_rec.mo to test, it would be a little painful  :P

Frank7D

I think as long as the camera model and settings used are the same, we can combine results from multiple testers (so no one person has to do so much). After all, we would like to know how the builds perform in general, so multiple cameras would be just fine, it seems to me.

a1ex

Got the first dataset. The script crashed at some point, so it's not as complete as I would like.


Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 286,759,717,1185,1273,1264,795,865,613,634
Raw_rec version : ML/MODULES/RAW_REC/54E90DE.MO
Recorded frames : 419,1261,1314,1319,961,574,453,1262,613,562
Raw_rec version : ML/MODULES/RAW_REC/60247BB.MO
Recorded frames : 374,717,922,997,624,518,614,581,1116,812
Raw_rec version : ML/MODULES/RAW_REC/CA5315A.MO
Recorded frames : 558,1319,876,968,1329,1311,1262,511,574,-1,1141549314
Raw_rec version : ML/MODULES/RAW_REC/DE4E1D1.MO
Recorded frames : 349,781,700,304,1260,1260,1240,1264,674,635
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 449,640,1321,1333,1330,1313,1306,848,-1,1075910152
Raw_rec version : ML/MODULES/RAW_REC/C350071.MO
Recorded frames : 249,736,1276,1261,1270,1260,493,807,418,824
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 916,1300,1318,1041,849,776,1265,767,538,-1,-190905046
Raw_rec version : ML/MODULES/RAW_REC/C350071.MO
Recorded frames : 778,1188,826,831,403,741,1263,1124,549,755
Raw_rec version : ML/MODULES/RAW_REC/CA5315A.MO
Recorded frames : 892,1114,416,696,1319,1310,1302,898,634,-1,-2103303980
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 372,1037,700,1010,1260,1260,880,681,816,716
Raw_rec version : ML/MODULES/RAW_REC/CA5315A.MO
Recorded frames : 394,1293,1321,1331,1080,610,615,1248,315,-1,-2078138256
Raw_rec version : ML/MODULES/RAW_REC/60247BB.MO
Recorded frames : 361,995,812,920,640,493,532,623,756,987
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 1280,1318,1299,1317,1313,1260,437,531,-1,603720293
Raw_rec version : ML/MODULES/RAW_REC/C350071.MO
Recorded frames : 905,963,589,1081,656,1164,1271,1236,666,270
Raw_rec version : ML/MODULES/RAW_REC/60247BB.MO
Recorded frames : 441,790,892,1094,1078,1077,971,576,548,664
Raw_rec version : ML/MODULES/RAW_REC/C350071.MO
Recorded frames : 1248,1316,1307,1317,951,1044,1149,553
Raw_rec version : ML/MODULES/RAW_REC/60247BB.MO
Recorded frames : 315,946,1089,812,669,662,704,621,782,567
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 902,910,1031,954,1308,1319,1306,678,300,-1,1825290946
Raw_rec version : ML/MODULES/RAW_REC/DE4E1D1.MO
Recorded frames : 601,788,1133,1272,1268,1266,904,830,585,-1,-1994319992
Raw_rec version : ML/MODULES/RAW_REC/60247BB.MO
Recorded frames : 382,1003,857,855,775,812,574,475,683,684
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 1123,1330,1311,1315,1311,1168,582,692
Raw_rec version : ML/MODULES/RAW_REC/C350071.MO
Recorded frames : 407,1153,1219,469,744,1162,1266,1260,665,459
Raw_rec version : ML/MODULES/RAW_REC/CA5315A.MO
Recorded frames : 820,708,586,928,1322,1302,1259,799,1036,-1,1652791484
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 472,814,912,1266,1260,1263,915,707,686,-1,-926858671
Raw_rec version : ML/MODULES/RAW_REC/54E90DE.MO
Recorded frames : 671,1266,1260,1263,463,827,940,1067,553,-1,1150832025
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 999,1267,1125,728,575,1225,1122,706,818,-1,855478615
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 916,1197,744,517,942,1158,1261,697,719,-1,-271791549
Raw_rec version : ML/MODULES/RAW_REC/60247BB.MO
Recorded frames : 640,826,603,625,737,900,1010,900,777,739


The obvious outliers at the end seem to indicate a bug when the card gets full.

Frank7D

Thanks a1ex. One question:
Will I be seeing those same module names in the future as well? (Just wondering if it's worth my time to build something into the spreadsheet to extract and sort the info according to those specific names.)

a1ex

No, I'll change them, but the name format will stay the same. I think it's worth writing a script that would parse a log file in this format, group data by build version, sort them by performance and compute how significant the differences are.
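Such a parser could look like the sketch below (hypothetical helper names; it assumes the exact log format shown earlier, and drops everything from the first -1 onward on a line, since those are the card-full outliers):

```python
import re
import statistics
from collections import defaultdict

def parse_rawbench_log(text):
    """Group recorded frame counts by raw_rec build version."""
    runs = defaultdict(list)
    version = None
    for line in text.splitlines():
        if line.startswith("Raw_rec version"):
            version = line.split(":", 1)[1].strip()
        elif line.startswith("Recorded frames") and version:
            counts = [int(n) for n in re.findall(r"-?\d+", line.split(":", 1)[1])]
            if -1 in counts:                        # card-full marker: drop it and
                counts = counts[:counts.index(-1)]  # the garbage value after it
            runs[version].extend(counts)
    return runs

# two entries copied from the dataset above, as a smoke test
log = """\
Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
Recorded frames : 286,759,717,1185,1273,1264,795,865,613,634
Raw_rec version : ML/MODULES/RAW_REC/CA5315A.MO
Recorded frames : 558,1319,876,968,1329,1311,1262,511,574,-1,1141549314
"""

# builds sorted by mean frame count, best first
for build, frames in sorted(parse_rawbench_log(log).items(),
                            key=lambda kv: -statistics.mean(kv[1])):
    print(build, round(statistics.mean(frames)), len(frames))
```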

Frank7D

I modified my spreadsheet to automate more things but it's still more manual than it needs to be. I'll try to refine it. In any case, I should be faster next time.

a1ex, I put your data sets in (excluding the outliers you mentioned) and of the six modules the only meaningful difference in the means was with 60247BB.MO, which all the other five modules beat.

So, tied for first are:
14E2991.MO (mean=963, set size=89)
CA5315A.MO (mean=952, set size=36)
C350071.MO (mean=907, set size=48)
DE4E1D1.MO (mean=901, set size=19)
54E90DE.MO (mean=897, set size=19)

and in sole possession of last place is:
60247BB.MO (mean=742, set size=60)

a1ex

Thanks - the slower one had a mistake (entire frame was aligned at 64 bytes instead of 512; I wanted to align only the EDMAC). The other ones had various fine-tunings that don't seem to make a difference.

Here's another set (I've ordered the lines manually and dropped the last two numbers from each line, since that's where the card may get full).


Raw_rec version : ML/MODULES/RAW_REC/36B111D.MO
832,1318,1299,1282,1151,1209,1177
966,1262,1211,1244,1231,1276,1266
1039,1285,857,1283,1291,1285,1280
941,1257,959,1283,1288,1283,1292
1073,1153,1163,1283,1297,1284,1132
707,1289,1260,1282,1066,1215,919

Raw_rec version : ML/MODULES/RAW_REC/E203CF9.MO
778,1260,976,1168,1262,1260,1274
1135,1069,1263,1284,1294,1260,1066
932,1159,1136,1040,1260,1265,1249
971,1035,990,1268,1260,1261,1147
1128,1161,1223,1038,1000,1260,1116

Raw_rec version : ML/MODULES/RAW_REC/14E2991.MO
1027,1116,1037,1318,1308,1282,1260
669,1259,1251,1166,1007,1169,1265
837,1172,1184,1148,818,1268,1256
890,1154,875,1268,1258,1263,949
916,1170,1171,1266,1154,1174,1137
890,1260,1187,1267,1171,1103,956
1050,1170,1206,1145,795,1174,1260
1038,957,999,958,1260,1260,1108
767,1161,1065,1031,1264,1262,1038


I've found a way to compute the T value in octave, but I'm not yet sure how to interpret it. The help says:

-- Function File: [PVAL, T, DF] = t_test_2 (X, Y, ALT)
     For two samples x and y from normal distributions with unknown
     means and unknown equal variances, perform a two-sample t-test of
     the null hypothesis of equal means.

     Under the null, the test statistic T follows a Student distribution
     with DF degrees of freedom.

     With the optional argument string ALT, the alternative of interest
     can be selected.  If ALT is "!=" or "<>", the null is tested
     against the two-sided alternative 'mean (X) != mean (Y)'.  If ALT
     is ">", the one-sided alternative 'mean (X) > mean (Y)' is used.
     Similarly for "<", the one-sided alternative 'mean (X) < mean (Y)'
     is used.  The default is the two-sided case.

     The p-value of the test is returned in PVAL.


So I tried:

>> a = [ <paste first dataset> ](:);
>> b = [ <paste second dataset> ](:);
>> c = [ <paste third dataset> ](:);

>> [median(a), median(b), median(c)]
ans =
   1258.5   1161.0   1166.0

>> [p,t] = t_test_2(a, b, ">") % is a > b ?
p =  0.19725
t =  0.85642

>> [p,t] = t_test_2(b, c, ">") % is b > c ?
p =  0.13103
t =  1.1282

>> [p,t] = t_test_2(a, c, ">") % is a > c ?
p =  0.022017
t =  2.0388

>> [p,t] = t_test_2(a, b, "<") % is a < b ?
p =  0.80275
t =  0.85642

>> [p,t] = t_test_2(b, c, "<") % is b < c ?
p =  0.86897
t =  1.1282

>> [p,t] = t_test_2(a, c, "<") % is a < c ?
p =  0.97798
t =  2.0388


So, to me it looks like a (36B111D) is a little faster than the other two (E203CF9 and 14E2991). I actually expected this, as it fixed a bug introduced with the SRM memory backend, where we got memory buffers larger than 32MB, which are slower to write (the limit is 32MB-512B). The latter is the maximum size that can be written in one DMA transfer, so larger buffers require two transfers; the second transfer is usually small (slower to write).

Between the other two, I'm not sure what conclusion to draw. The median is a tad lower on b, but the difference is tiny. The T number is small, so the difference is probably not significant, but the P values (0.13 vs 0.87) seem to indicate that b might be a little bit faster. Should I run the test between those two for a longer time, or should I consider them equal?

(P.S. To see what those codes mean, look them up on the repository.)
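As a cross-check, the octave session above can be reproduced with just the Python standard library. The sketch below uses the pooled (equal-variance) two-sample t statistic, which is what t_test_2 computes under the null; only t is computed, since recovering the p-values would additionally need the Student t CDF:

```python
import math
import statistics

# datasets from the log above: a = 36B111D, b = E203CF9, c = 14E2991
a = [832,1318,1299,1282,1151,1209,1177, 966,1262,1211,1244,1231,1276,1266,
     1039,1285,857,1283,1291,1285,1280, 941,1257,959,1283,1288,1283,1292,
     1073,1153,1163,1283,1297,1284,1132, 707,1289,1260,1282,1066,1215,919]
b = [778,1260,976,1168,1262,1260,1274, 1135,1069,1263,1284,1294,1260,1066,
     932,1159,1136,1040,1260,1265,1249, 971,1035,990,1268,1260,1261,1147,
     1128,1161,1223,1038,1000,1260,1116]
c = [1027,1116,1037,1318,1308,1282,1260, 669,1259,1251,1166,1007,1169,1265,
     837,1172,1184,1148,818,1268,1256, 890,1154,875,1268,1258,1263,949,
     916,1170,1171,1266,1154,1174,1137, 890,1260,1187,1267,1171,1103,956,
     1050,1170,1206,1145,795,1174,1260, 1038,957,999,958,1260,1260,1108,
     767,1161,1065,1031,1264,1262,1038]

def pooled_t(x, y):
    """t statistic of the equal-variance two-sample t-test."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1/nx + 1/ny))

print([statistics.median(s) for s in (a, b, c)])   # [1258.5, 1161, 1166]
print(round(pooled_t(a, b), 4))                    # t for "is a > b?" (~0.8564)
print(round(pooled_t(b, c), 4))                    # t for "is b > c?" (~1.1282)
print(round(pooled_t(a, c), 4))                    # t for "is a > c?" (~2.0388)
```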

Frank7D

A: 36B111D.MO
Count = 42
Mean frames = 1178

B: E203CF9.MO
Count = 35
Mean frames = 1150

C: 14E2991.MO
Count = 63
Mean frames = 1115

A vs. B
T-Stat: 0.856418986
Critical T: 1.665996224

A vs. C
T-Stat: 2.03877008
Critical T: 1.66319668

B vs. C
T-Stat: 1.128192618
Critical T: 1.667572281

So A beat C (36B111D.MO beat 14E2991.MO).

I notice that your "t" value equals my "T-Stat" value.

"Should I run the test between those two for a longer time, or should I consider them equal?"

Maybe. My spreadsheet couldn't find a winner between A and B or B and C, so maybe B needs more points.

DeafEyeJedi

Holy Moly! It's been a while since I last jumped into this puddle of mud. Excellent progress you have all made so far; to those that contributed, this is just a wonderful piece of art.

I love the collaboration you have all created in this thread, now that I'm finally back home from being on the road away from my lab. It seems the 5D3 and 7D have been thoroughly tested.

Does this mean it would be best for me to do this speed test with a different camera model than those two I've spotted so far?

I also noticed new nightlies just came out last night (54e90de)... we should be testing on the latest, correct?

I'd prefer to use @a1ex's build to squeeze the last bit of performance out of this. Is it wrong for me to feel this way?  :P
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

a1ex

You can try these two pull requests, and let us know if you see any difference in practice:

- (RAW) https://bitbucket.org/hudson/magic-lantern/pull-requests/710/raw_fixes-part-2/diff
- (MLV Lite) https://bitbucket.org/hudson/magic-lantern/pull-requests/685/proposal-completely-replace-the-old-raw

Builds (these two should be identical speed-wise):
- raw: raw_rec.mo (e78c7b8)
- mlv: raw_rec.mo (30bb06a, not tested)

The change that gave the highest score in the above benchmark is included in both.

For core, any recent version that loads the module is fine. Latest nightly won't make any difference, compared to the previous one (see change log). The nightly with the black level fix should make a difference if you actually had problems with the black level. Speed-wise, I don't expect any differences coming from the core.

So, the questions for these two builds are:
- are the speeds identical in both? (as you could see, this question is not very easy to answer)
- how do they compare (speed-wise) with the raw_rec from the nightly?

If the answers are favorable, that means we have achieved dmilligan's goal with MLV Lite: save valid MLV instead of RAW, with performance identical to the old raw_rec. Plus a tiny speed boost.

There's still room for increasing the speed, with MLV only (it can't be done with RAW). We can exploit an MLV feature that allows the frames to be stored in any order in the file. No idea how much the speed gain could be, though.

I'm tempted to merge the current version of mlv_lite to nightly if it actually proves to be identical to the original raw, as it seems pretty solid and the code changes are minimally invasive. After that, we are free to explore further optimizations.

Frank7D

By the way, just for clarity: when the versions "tie", that doesn't prove they're definitely the same, just that we haven't been able to show they're different. It could be that a larger data set would reveal a difference.

Ottoga

@A1ex
Do you still want more test data from a 7D? If so, give me a link to your test raw_rec modules and I'll run your test script against them.
EOS 7D.203, EFS 55-250mm, EF 75-300 III, Tamron 16-300 DiII VC PZD Macro, SpeedLite 580EX II.

a1ex

Yes, with builds from reply #42.

I'm waiting for test results for those builds before continuing, because I want to merge things into nightly.

Ottoga

@a1ex
Is there a compiled version of the rawbench.lua script available? I don't have the facilities to compile.
Also, in your post you said this script is not compatible with the latest nightly. Is this still the case? If so, is there a test build of a compatible nightly available?

a1ex

Lua scripts are interpreted directly on the camera (no need to compile, just place it in the ML/SCRIPTS directory).

I'll test lua_fix again today, and if it passes on 5D3, it will be merged in the next nightly.

edit: sorry, I've already found two regressions in core code, and a major issue on the I/O side. I've fixed them, but the changes were not trivial, and I don't want to do another rushed merge.

Ottoga

@A1ex

I'm making some progress; the script is trying to run. However, I see the following error message on the screen when I turn the camera on and enter the ML menu system.



Stack Traceback:
[c]: in function "require"
ML/SRIPTS/RAWBENCH.LUA:21: in main chunk



Looking at the script, I assume that I'm missing additional scripts called "logger" and "stats".


Camera conditions are:
April 19, 2016 nightly
Camera in AV mode
rawbench.lua installed in the ML/SCRIPTS directory
Modules loaded: raw_rec.mo, lua.mo
ML: Global Draw off, defaults for all other settings

a1ex

I didn't merge lua_fix yet. Will test it again tonight.

edit: sorry, I'm very tired. I managed to update the script to analyze MLV files as well (code committed) and fixed a few more minor things in Lua, but I don't have enough energy to run the tests for merging into the nightly. I could really use some help here...