Draft: Add burst processing. Use postprocess.py instead of postprocess.c
Adds burst processing via Python. Use Python postprocessing instead of the C implementation.

Signed-off-by: Luigi311 <git@luigi311.com>
I do not have a device that runs Millipixels to test this since it fails on the PinePhone, but calling the script directly and pointing it at a Megapixels burst folder works. This is heavily inspired by https://github.com/luigi311/Low-Power-Image-Processing, but minimized to only what it needs, and it removes the container fallback since I don't think you guys would want that.
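For context, the core of it is just stacking the burst frames to cut down noise. A minimal sketch of that idea (not the actual script — it skips frame alignment, and the names are illustrative), assuming rawpy, numpy and OpenCV:

```python
import glob
import numpy as np
import rawpy
import cv2

def stack_burst(burst_dir, output_path):
    """Demosaic every DNG in the burst and average them to reduce noise.
    Frame alignment is omitted here; the real script also handles more."""
    frames = []
    for path in sorted(glob.glob(f"{burst_dir}/*.dng")):
        with rawpy.imread(path) as raw:
            # use_camera_wb keeps the white balance recorded by the camera
            frames.append(raw.postprocess(use_camera_wb=True).astype(np.float32))
    stacked = np.mean(frames, axis=0)  # simple mean stack
    cv2.imwrite(output_path, cv2.cvtColor(stacked.astype(np.uint8), cv2.COLOR_RGB2BGR))

stack_burst("burst_folder", "stacked.jpg")
```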
I am not sure how dependencies for Python would be handled, but here are the required Python packages: `opencv-contrib-python numpy rawpy h5py scikit-image`

added 1 commit
- f42b6b88 - Add burst processing. Use postprocess.py instead of postprocess.c
Without this change, postprocessing of a big cam's photo takes 3 seconds. With this change, it's about 40 seconds.
I don't think we want to use burst processing by default. It would be good to have it as an option or to use it only in low-light conditions, but for well-lit photos it won't provide much improvement and will only cause excessive motion blur on moving elements. It also seems to cause exaggerated hot pixels in low-light. It may make sense to use it unconditionally on the PinePhone like Megapixels does, but we have a higher quality sensor where it's not such an obvious win. I'd rather focus on things like sharpening, de-noising and contrast enhancement with some heuristics on when to use what, and try to find ways to do it really fast (not sure if Python will be helpful for this goal).
Note that the current C implementation was explicitly made to avoid having to call convert and exiftool separately, as that wastes a lot of time for no reason. You're reverting to the old behavior here, which is a hard NACK from me.
For the record, the dependencies you're looking for are `python3-opencv python3-numpy python3-h5py python3-skimage`. You're also missing `exiftool` and `imagemagick`. rawpy is not packaged in Debian/PureOS.

The shell script doesn't work at all right now because it uses DOS line breaks and tries to call `python` where it should call `python3` - or even better, the script itself with a correct shebang. Also, the `src_py` directory doesn't get installed; all the files end up flat in `/usr/share/millipixels`, so the script fails to import them. And at the end, exiftool failed to save the tags:

```
[pon, 8 maj 2023, 20:50:12 CEST] post_process: all_in_one: Returned: Not an integer for MinoltaRaw:ImageWidth
Not an integer for MinoltaRaw:ImageHeight
Can't convert Sony:ColorSpace (matches more than one PrintConv)
Error: Error writing output file - /home/purism/Pictures/IMG20230508204932_processed.png
0 image files updated
1 files weren't updated due to errors
```
...and the thumbnail button doesn't work anymore.
The produced image (PNG, which BTW breaks the preview button as is) seems to ignore the white balance set in the app and doesn't do brightness correction as the current postprocess code does.
I'm afraid that even after fixing the issues above, this isn't going to be a viable approach.
Edited by Sebastian Krzyszkowiak

Can you open up the log and see what takes the longest amount of time? I'm curious what pushes it to 40 seconds. It might be exiftool like you were saying, since I never actually got around to implementing something else to handle it in Python.
Can you post the low-light photos that show the hot pixels? That isn't something I've seen so far, but I haven't tested this on anything but PinePhone/PinePhone Pro images. Even better, can you upload the burst directory somewhere so I can use it to test?
The current implementation should already be doing sharpening, denoising and contrast enhancement, and it's done via OpenCV/NumPy, so it all gets executed in C.
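Roughly the kind of OpenCV calls I mean (a sketch only; the exact functions and parameters in the script may differ):

```python
import cv2

def enhance(img_bgr):
    # Denoise with OpenCV's non-local means (runs in native code)
    out = cv2.fastNlMeansDenoisingColored(img_bgr, None, 5, 5, 7, 21)
    # Contrast enhancement: CLAHE on the luminance channel only
    lab = cv2.cvtColor(out, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    out = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    # Sharpen via a simple unsharp mask
    blurred = cv2.GaussianBlur(out, (0, 0), sigmaX=3)
    return cv2.addWeighted(out, 1.5, blurred, -0.5, 0)
```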
Convert isn't required in this implementation since you can set the internal and external image extensions to jpg and it will use skimage to do the conversion, which should be fast. If the external and internal image extensions are the same, it just does a simple mv.
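The save step boils down to something like this (an illustrative sketch of the logic, not the script's exact code):

```python
import shutil
from skimage import io

def save_output(internal_path, external_path):
    """If the internal and external extensions match, just move the file;
    otherwise re-encode with scikit-image instead of shelling out to convert."""
    if internal_path.rsplit(".", 1)[-1] == external_path.rsplit(".", 1)[-1]:
        shutil.move(internal_path, external_path)
    else:
        io.imsave(external_path, io.imread(internal_path))
```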
Damn DOS line breaks. It's always a hassle when I'm developing on my Windows machine; I even ran dos2unix on it a few times and tried working on it via WSL to see if that would resolve it, but it looks like they still made their way back in.
I've never used meson.build, but I assumed adding that install_data is what copies over src_py. Should I just specify src_py instead of all the individual files?
Edited by Luigi311

added 1 commit
- 4544956d - Default jpg, Add quality to postprocess, Fix command
The log file it generates should give you a detailed output of which steps are actually taking long to run. I did just notice that postprocess.c uses half_size while mine does not, so I can add that in to match. We can also adjust the workflow so single_image only processes 1.dng and does the processing for it, similar to the current implementation, and then the post_process run will read the other DNGs and append them to the HDF5 that single_process would generate. We can then make the post_process step optional, like you recommended, based on whether it's a dark image or not.
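A rough sketch of what that darkness check could look like (the threshold is a placeholder to tune, and this isn't code from the branch):

```python
import rawpy

def needs_burst_stacking(first_dng, threshold=0.15):
    """Decide whether the extra burst stacking is worth running based on how
    dark the first frame is; well-lit shots skip it to avoid motion blur."""
    with rawpy.imread(first_dng) as raw:
        # half_size=True mirrors postprocess.c and roughly quarters the work
        rgb = raw.postprocess(half_size=True, use_camera_wb=True)
    return (rgb.mean() / 255.0) < threshold
```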
Doing some testing on an OG PinePhone, it looks like it takes around 1-2 s for postprocess.c and 20-24 s for both of my images. Of that, it looks like it's 6-7 s to generate the single image while reading in all the DNGs, then another 9 s to do post_process, and another 6 s or so for exiftool. I need to see what postprocess.c is doing for EXIF so I can implement something similar and remove exiftool to shave off a lot of time.
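If it's only a handful of tags, something like piexif might be enough to drop exiftool entirely. A sketch under that assumption (piexif is not currently in the dependency list, and the tag set here is just an example):

```python
import piexif

def write_basic_exif(jpeg_path, make, model, iso, exposure):
    """Embed a few EXIF tags directly into the finished JPEG so the separate
    exiftool call can go away. `exposure` is a rational, e.g. (1, 30) for 1/30 s."""
    exif_dict = {
        "0th": {
            piexif.ImageIFD.Make: make.encode(),
            piexif.ImageIFD.Model: model.encode(),
        },
        "Exif": {
            piexif.ExifIFD.ISOSpeedRatings: iso,
            piexif.ExifIFD.ExposureTime: exposure,
        },
    }
    # Serialize the tags and splice them into the existing JPEG in place
    piexif.insert(piexif.dump(exif_dict), jpeg_path)
```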
In our case exiftool was adding about 2-3 seconds IIRC.
Keep in mind that images coming from the PinePhone's camera are lower resolution.
Edited by Sebastian Krzyszkowiak

That's true, but I'm also testing with an OG PinePhone, which is a lot slower at everything in general, lol. So while it's only 5 MP instead of 13 MP, I'm still testing everything relative, so it's 1-2 s with postprocess.c on an OG PinePhone 5 MP DNG file. If you can upload the burst directory somewhere, I can test with the full 13 MP DNG files that the Librem 5 generates, so I can see how it scales and how it will work, relatively at least, on the Librem 5.
Sorry about that, the build shouldn't fail anymore, though there will be some Python packages missing. I haven't tested it via Millipixels, only running it directly, so I'm not sure if it will actually work when run from Millipixels. Doing some quick testing on my PinePhone Pro with an OG PinePhone burst, it looks like postprocess.c takes around 0.5 s and my Python script takes 6.3 s, but of that only 1.1 s is the actual computing and the rest is importing cv2. This is also with a 5 MP photo versus the Librem 5's 13 MP, so I'm not sure how much of a difference 13 MP will make; I would assume the gap would tighten.
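One cheap win there would be deferring the heavy imports so the cost is only paid when real processing happens (`python3 -X importtime postprocess.py` is useful for confirming where the startup time actually goes). A sketch of the idea:

```python
import sys

def process_burst(burst_dir, output_path):
    # Defer the heavy imports: on slow hardware importing cv2 alone can take
    # several seconds, so only pay for it when there is real work to do.
    import cv2
    import numpy as np
    # ... the actual stacking/enhancement would go here ...

if __name__ == "__main__":
    if len(sys.argv) != 3:
        # Cheap paths like usage errors no longer wait on the OpenCV import
        sys.exit(f"usage: {sys.argv[0]} BURST_DIR OUTPUT")
    process_burst(sys.argv[1], sys.argv[2])
```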
Edited by Luigi311