
Tony Northrup is wrong (about Adobe Super Resolution)

Take a look at the left and right halves of the image below (hover with your cursor and click). Both halves come from the same Canon RAW image: the left is unprocessed, the right has only been upscaled using Adobe’s Super Resolution. The difference is clear. Tony Northrup’s YouTube video on Super Resolution was sent to me independently by two members of a local photography club. In it, he claims that Super Resolution is useless on all but Fuji X-trans files.


He’s wrong. He misses the primary use of the feature, namely, shots that are heavily cropped. This photo is such an example. Ignore that it is boring; I grabbed it from an online sample.

Understand that the difference between the left and right images is hardly an anomaly. You will get similar results with any reasonably sharp image that has a low pixel count. Super Resolution would yield similar results for a fuller-frame image that had to be blown up to a very large size, such as a wall mural.

Northrup’s conclusions only apply to the case he presented: a well-composed full-frame image displayed at moderate size. (Even so, he compared an unprocessed Super Resolution image with an image he further tweaked for detail – not quite fair.)

So for the images he worked with, he is correct that improvements are too marginal to be worth the effort.  Perhaps Northrup has no shots that suffer the low-resolution blues due to heavy cropping.

I am not so lucky. So I used Jeffrey Friedl’s Data Explorer, a crazy-useful plugin (grab it and tip him a few bucks) that lets Lightroom find and group images by more than 200 data criteria – criteria like crop amount.
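(For the terminally curious, you don’t strictly need a plugin to approximate this. If you save metadata to XMP sidecar files, Lightroom records its crop settings there, and a few lines of Python can flag the heavy crops. The sketch below is a hypothetical stand-in, not how Data Explorer works; it assumes sidecars carrying the usual crs:CropLeft/CropTop/CropRight/CropBottom values, normalized 0–1, and a folder path you’d substitute for your own.)

```python
# Rough stand-in for a crop-amount search: scan XMP sidecars and
# estimate how much of each frame was cropped away.
import re
from pathlib import Path

CROP_FIELDS = ("CropLeft", "CropTop", "CropRight", "CropBottom")

def crop_fraction(xmp_text):
    """Fraction of the frame removed by the crop, or None if uncropped."""
    values = {}
    for field in CROP_FIELDS:
        m = re.search(rf'crs:{field}="([-\d.]+)"', xmp_text)
        if not m:
            return None  # no crop recorded in this sidecar
        values[field] = float(m.group(1))
    kept = ((values["CropRight"] - values["CropLeft"])
            * (values["CropBottom"] - values["CropTop"]))
    return 1.0 - kept

# Flag everything cropped by 50% or more -- the "rescue" candidates.
for xmp in Path("~/Pictures").expanduser().rglob("*.xmp"):
    frac = crop_fraction(xmp.read_text(errors="ignore"))
    if frac is not None and frac >= 0.5:
        print(f"{xmp.name}: {frac:.0%} of the frame cropped away")
```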

I found dozens of images cropped by 50% or more that are easy candidates for Super Resolution treatment! These become “rescue” images, and I hope that in the near future I’ll be able to batch-process them in Lightroom’s Super Resolution implementation (soon, please, Adobe).

Side note:

Fuji X-trans RAW files represent a special case; they require specialized processing, and Lightroom’s less-than-stellar treatment has often led photographers to seek third-party solutions. Some of these X-trans images will benefit from Super Resolution even at more “normal” sizes.

See my original post on Super Resolution, which also has other image samples.

Below, a rather extreme blow-up.  

[twenty20 img1=”24032″ img2=”24031″ offset=”0.5″ before=”without” after=”with super resolution”]

Imagine these two treatments represented two different lenses. Would you want to take one back?

[twenty20 img1=”24095″ img2=”24096″ offset=”0.5″]

The original photo, to illustrate size. To reiterate, super resolution won’t make a difference unless you’re blowing an image up to a very large size, or using a very severe crop.  In either of these cases it can make a large difference.


Adobe Super Resolution – Fuji X-trans Game Changer!

Impatience got the best of me, so I didn’t wait for Adobe’s new Super Resolution feature to reach Lightroom (it’s said to be coming soon); I tried it in Photoshop’s Camera Raw. Let’s cut to the chase – in certain circumstances the results are nothing less than staggering.

The following images tell the tale. The one on the right is the Super Resolution image, with four times the pixels of the original (double the width, double the height).

Note that this example is blown up to 200% for the comparison. At normal viewing sizes, the differences aren’t nearly as impressive (more on this later).

Fuji shooters know that certain features, such as leafy vegetation, haven’t done so well with Adobe’s demosaicing algorithm. Fuji’s X-trans sensor uses a non-standard photosite array that, while resolving some issues, has not had the greatest results with non-specialized RAW converters (read: Adobe’s, for one).

Tech note:

The easiest way to run it currently is to open your RAW image (or JPEG, but why?) in Photoshop, set to open files in Adobe Camera Raw mode. It’s hidden under the three dots, as “Enhance Image.”

The massive file produced is autosaved to the original directory; you’ll need to import it into Lightroom. I’ve had intermittent app crashes, and so far the best results seem to come when I close Lightroom and open the target file after Photoshop is already loaded, though I’ve seen no clear reporting of this on Adobe’s site. Your mileage may vary.

The image below represents a 200% blow-up.

[twenty20 img1=”23850″ img2=”23851″ offset=”0.5″ before=”Fuji RAW” after=”Super Resolution” hover=”true”]

A picture does say a thousand words, doesn’t it? This was shot on my Fuji X-E2, which has 16 megapixels; the feature might forestall my need to upgrade in the never-ending chase for more pixels. I don’t know whether images from cameras using traditional Bayer sensors will see as marked an improvement.

Tony Northrup, in a YouTube video titled Photoshop Super Resolution: 4X megapixels (actually tested – surprising!), reports that the enhancement offers little improvement for non-Fuji images. Tony is wrong because he is right only in a limited sense: see my post Wrong About Super Resolution.

How does Adobe do this magic?  You’ve probably been hearing a lot more about artificial intelligence (AI) recently.  From Adobe’s website: “The idea is to train a computer using a large set of example photos. Specifically, we used millions of pairs of low-resolution and high-resolution image patches so that the computer can figure out how to upsize low-resolution images.”

Prior to AI, higher resolution was achieved by blowing an image up to double its dimensions using a mathematical algorithm (bicubic interpolation), which essentially smooths the image by giving each new pixel a bit of information from its neighbors in the original. (Bicubic actually samples a 4×4 grid of neighbors, but you can picture each pixel at the center of a tic-tac-toe board, “borrowing” a little information from each of its eight neighbors.)
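In practice that pre-AI recipe is tiny. Here’s a minimal sketch using the Pillow imaging library (the filename is a placeholder):

```python
# Double an image's dimensions with plain bicubic interpolation:
# each new pixel is computed from a small neighborhood of originals.
from PIL import Image

img = Image.open("photo.jpg")
upscaled = img.resize(
    (img.width * 2, img.height * 2),   # 2x linear size = 4x the pixels
    resample=Image.Resampling.BICUBIC,
)
upscaled.save("photo_2x.jpg")
```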

With AI, something very different is happening; new information is added based on what the software thinks (from massive training) should be there!

It should be understood that by creating pixels out of whole cloth, so to speak, AI can create problems of its own.  The information supplied might not be right. Artifacts can be introduced.

Below: the same image at 100%. Notice how, at this magnification, the differences are minimal. Pay close attention to the bricks directly under the glass portion of the light, the bare branches to the right of the light, and the bare branch that parallels the light. Both detail and color are improved, but only marginally.

[twenty20 img1=”23883″ img2=”23882″ offset=”0.5″ before=”Fuji RAW” after=”Super Resolution” hover=”true”]

What’s the takeaway here? If you’ve captured a scene full-frame and it is displayed at a normal size – on the web, say, or as a 4×5 print – the difference will be visible, but very marginal. But say you’re blowing the image up to an 8×11 or much larger print; then the difference can be very visible.

Let’s take a different example: you’ve taken a picture but discover in post-processing that you want to crop heavily. Or perhaps you would rather have used a telephoto lens, but didn’t have one with you. Blowing up such an image would normally show extreme degradation; Super Resolution can recover much of it.

Stephen Bay has done a super comparison of Super Resolution with Gigapixel AI, a product of Topaz Labs. Both products do essentially the same thing, with similar results. I might prefer the Gigapixel treatment slightly; I like the denoising it adds, don’t find it as fake as Stephen does, and am not as bothered by the artifacts.

But these are quibbles; both products create magic. Note that both create new image files much larger than the original RAWs. My Fuji shots are around 33 MB, and Super Resolution adds a new file about eight times larger – roughly 260 MB! In other words, this is a process best reserved for truly deserving shots.

The Topaz product, according to Bay, takes several minutes to process an image. The damage for Adobe’s isn’t nearly as great: under a minute and a half for my X-E2 RAW on a mediocre computer.

Imagine these two treatments represented two different lenses. Would you want to take one back?

[twenty20 img1=”24095″ img2=”24096″ offset=”0.5″]


Sharpen AI by Topaz Labs – a Winner!

Not long ago, I had the opportunity to photograph Kiane, a lovely Minneapolis-based model. One of my favorite shots of her was spoiled because I missed the focus. (Note to self: avoid using manual-focus lenses in risky situations.)

Enter Sharpen AI, a software product from Topaz Labs. They claim sharpening “repair jobs” that borders on the miraculous, so I thought I’d give it a try. I’ll cut to the chase and present the before and after. The left “before” side represents my best effort to sharpen the image in Lightroom; the right “after” image is with Sharpen AI.

I have seen some “knock your socks off” examples, but this is not one of them. On a mobile device you won’t be able to see the difference, but on a desktop computer, particularly around her eyes and mouth, the difference is obvious. And it is exactly the difference between a shot that doesn’t quite make it, and one that does.

A side note or two. The image was shot on a Fuji X-E2. Results with Sharpen AI seemed better when I did not try to sharpen in Lightroom first, but this should be considered an early finding. Also, Sharpen AI has three different sharpening “specialty” modes, and I would not have considered the softness in this image (exposed at 1/250 s) to be the result of motion blur. But Sharpen AI’s auto-detect said that stabilization mode was the best way to go, and indeed it was.

[twenty20 img1=”22753″ img2=”22757″ offset=”0.5″ before=”Before” after=”After”]

On the right is another example. Viewed on a desktop computer, the difference might be marginally noticeable; on a phone it is invisible. Now click on the image for a blow-up – the difference is obvious. (The left image is a little over-sharpened; I didn’t take the time to fix it.) This helps illustrate an important point: sharpness is largely dependent on resolution, which is a function of viewing size and/or viewing distance.

This principle illustrates the “danger” of pixel-peeping; you can waste a lot of time and effort working to sharpen an image to use at a size or (less often) distance that renders the additional sharpening unnoticeable. Learning when and where it matters is key.


Einstein & Monroe

You may have seen this illusion. Look at it close-up, and it is Albert Einstein. Back away a few feet and it is Marilyn Monroe. What is going on here? You’re seeing the effects of spatial frequency.

Close up, you’re seeing the fine detail that you recognize as Einstein’s face. When you back away (or perhaps remove your glasses), that fine detail is lost to distance, replaced by the softer, less detailed information of Marilyn’s face.
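If you’re curious how such a hybrid image can be built, here’s a toy sketch using the Pillow library: keep the high spatial frequencies of one portrait and the low frequencies of another, then add them. The filenames are placeholders, and a convincing result needs carefully aligned faces and tuned blur radii.

```python
# Toy hybrid image: high spatial frequencies of one portrait plus the
# low spatial frequencies of another. Up close the eye picks out the
# high-frequency layer; at a distance only the low frequencies survive.
from PIL import Image, ImageChops, ImageFilter

fine = Image.open("einstein.jpg").convert("L")
coarse = Image.open("monroe.jpg").convert("L").resize(fine.size)

# High-pass: subtract a blurred copy, shifting values up to mid-gray.
high = ImageChops.subtract(
    fine, fine.filter(ImageFilter.GaussianBlur(8)), scale=1.0, offset=128
)
# Low-pass: just a blur.
low = coarse.filter(ImageFilter.GaussianBlur(8))

# Recombine, removing the mid-gray offset added above.
hybrid = ImageChops.add(low, high, scale=1.0, offset=-128)
hybrid.save("hybrid.jpg")
```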

It’s a little bit spooky and your initial takeaway might be that seeing is not believing.


Lightroom Enhance Details and Fuji – a winner!

Note: this was written in late 2019. Fast-forward to mid-March 2021: Adobe has launched “Super Resolution” – currently only in Adobe Camera Raw and Photoshop. I will review it when it comes to Lightroom.

Introduction

Almost all my Fuji image gurus believe the images produced using Lightroom’s new Enhance Details feature are as good as, if not better than, any other solution, and I’ve come to the same conclusion myself.

But this doesn’t mean it should be used on every image for a few reasons: disk space, processing time, and workflow. 

Disk space 

My normal X-E2 RAW file size is 25 MB. Enhance Details creates a DNG file of 70–90 MB; call it 75 MB to make the math easy. Each enhanced image will therefore take up 3x additional disk space (you will be keeping your original RAW).

Now, for reasons I’ll explain, it is extremely unlikely that you’ll want to enhance more than 5% of your images. In this scenario, you’d require 15% more disk space for your images.
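The arithmetic, as a few lines of Python using the figures above:

```python
# Back-of-envelope disk math, using the numbers from this post.
raw_mb = 25          # typical X-E2 RAW
enhanced_mb = 75     # the Enhance Details DNG ("call it 75")
enhance_rate = 0.05  # enhance at most ~5% of your images

per_image = enhanced_mb / raw_mb    # 3x additional space per image
overall = enhance_rate * per_image  # 0.15 -> 15% more disk overall
print(f"{per_image:.0f}x per enhanced image, {overall:.0%} overall")
```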

Based on the current cost of hard drives, the added cost of an enhanced image is negligible.

Processing Time

On my older computer it takes a minute or two to create an enhanced image. That sounds terrible, but wait. Newer computers with dedicated graphic cards are taking under ten seconds per image, sometimes under five seconds!

If you’re not using one of these cards now, there is probably one in your future; Adobe is now solidly leveraging their capabilities in the Develop module.

But even with a slow dinosaur computer, a few key points are in order:

Enhance Details can wait until after you cull and do post-processing. Most importantly, it’s not for all, or even most, images – only a small minority of them.

This is an important point: as real as the improvement can be, it will matter little on images that aren’t blown up large for printing, and even then the improvement will often be marginal.

And bear in mind that as image size increases, our viewing distance increases somewhat proportionately; we usually view large pictures at a distance of several feet.

Pixel-peeping mamas

All this is to address the OCD malady called pixel-peeping. It has been discussed enough that I needn’t add more than this perhaps disheartening point: only a small percentage of your images are likely to deserve and benefit from this enhancement.

For social media use? Don’t even bother; it’s just not worth it at that size.

There is one special area where Enhance Details can become important: those shots – we all have them – where the discovered “real” picture is a substantially reduced crop of the original image.

Imagine that your discovered picture occupies only a quarter of your frame. This is where the boost from an enhanced image could give you the improvement that turns “didn’t quite make it” into a win.

Conclusion

Once I started working with Enhance Details and figuring out the actual costs in both time and money, things became startlingly clear to me.

Yes, it’s a pain to have larger files and to wait for Adobe’s AI to do its magic. But that applies to only a tiny fraction of images, and you make it all back in workflow.

You don’t need to go outside of Lightroom, you don’t need to be thinking about two systems. That’s worth a lot. 


John’s Big Note

“John’s Big Note is a very simple plugin to add a big free-form note for your own metadata.”

If you’ve worked in Lightroom for any length of time, a moment has come when you’ve wanted to leave yourself a note about a photo, be it details of the shot or development notes. This plug-in is for you.

The developer John Beardsworth is a Lightroom guru on top of being a great photographer – here’s his photography: http://www.beardsworth.co.uk/photos/ and here’s his Lightroom section: http://www.beardsworth.co.uk/lightroom/.  John writes “Big Note is free and should do the very limited job for which it’s designed. But it is totally unsupported.” He’s serious; after installation, the Lightroom Plug-in Manager reads “No free support. If you can’t figure out how to use this plugin, it’s probably not for you.” http://lightroomsolutions.com/plug-ins/big-note/

I have installed this and it works great! If you’ve installed any plug-ins, it should be relatively easy for you. If it isn’t and you still need it, give me a shout.



“Chasing” in Lightroom

In construction, an accumulation of small errors can compound into a large, catastrophic one. I once heard someone use the word “chasing” to describe this phenomenon. The way to avoid it is to measure the total distance and subdivide, rather than adding together smaller measurements.

When you work in Lightroom or other post-processing applications, there are many different places to adjust color: white balance, vibrance, saturation, the HSL/Color panel, and split toning, for example. For the braver, there are color curves, and even the basic Calibration color adjustments.

If you’ve worked with these a bit, you’ve probably experienced this chasing – think of chasing your tail. Here’s a way to visualize it: turn off one, or a few, of your color or tone adjustments. If the results are garish, you’ve probably used a color or tonal adjustment to compensate for an overuse of one or more other adjustments.

It’s sometimes hard to see this phenomenon as it is happening – and indeed, your vision can legitimately change as you process, with progressive adjustments leading you in a new direction.

But often when you see this effect, it’s a good idea to take a Lightroom snapshot, and begin reversing course.

Ideally, each control should create a small improvement in your overall color and tonality; turning any one control off, or zeroing it out, should result in only a small, marginal difference.
(In rare cases, two opposing adjustments might combine to achieve better results than one alone – but this is the exception rather than the rule.)


Accidents will happen

Tree Mandala

Working on an image I’d taken of some trees, I imported two slightly different Lightroom treatments of that image into Photoshop. My thinking was I might blend them to get something between the two treatments. On a whim I started playing with some different blend combinations and saw one I really liked (a subtractive blend).

But when I blew the image up on my screen, the effect disappeared!  I soon realized that the rendering I liked was a quirky result of a screen optimization process. Usually these optimizations give you a good representation of the image, but under extremes they can sometimes be weird. This was one of those times.

What this meant was the resultant image that I liked and wanted to continue processing didn’t really “exist” – it was a rendering artifact that only existed on the screen.

So to capture it, I had to take six enlarged screenshots of the screen image, paste them together, and use the reassembled image as a starting point for further post-processing refinement. Luckily the size was large enough that I could avoid low-res pixelation.

The image you see is the further result of arranging four of the final images symmetrically to create a tapestry.

The following image is the result of a similar sort of accident. Frustrated with my inability to make the image work for me in Lightroom, I sent it over to Photoshop. There I idly experimented with some of Photoshop’s light-mixing modes and suddenly I had something I liked.

It would be misleading, however, to say that for either image I just got lucky. Stumbling upon the accident only happens after many trials and errors. And when the accident is found, the work has just begun.

Just as with normal photos, the “finishing” of post-processing is important, and with the accidents, it can be more arduous.


Four ways of looking at a building

When you look at these images of a building, each taken at a different time of day, notice the different quality of light. How does our mind see times of day out of shades of light?
Early morning:

Mid day:

Sundown:

Twilight:
You may have guessed the reveal even without the couple in the lower left: these are all the same image, each rendered differently in Lightroom.