Archive 2013 · Lightroom 5 Performance Testing: Pt.2 - SSD's

  
 
15Bit
p.1 #1 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Continuing my latest series of LR5 performance testing, I am looking at the impact of SSDs on how fast LR5 runs. I was going to do this as Pt.3, but I have a strong suspicion that the Develop module testing will show exactly the same results as for LR4.3 last year, and I reckon folk are more interested in the SSD question anyway. Historically this seems to be a pretty confused issue, so I hope to bring a bit of fact-based clarity to the table here.

In case you have missed it, the first part of the current performance testing series is at:

https://www.fredmiranda.com/forum/topic/1220376


The system on test is again:

Intel i5-3570K clocked at various speeds from 1.6GHz to 4.3GHz, with turbo boost off.
Asus P8Z77-V-Pro motherboard
16GB of DDR3-1600 RAM
128GB Samsung 830 SSD - Boot drive (mounted as C-Drive)
80GB Intel X25 G2 SSD – Mounted as G-Drive
320GB Seagate Barracuda 7200.10 – Mounted as F-Drive
4GB RAMDISK – Mounted as O-Drive
Assorted other hard disks which don’t come into play here.
Lightroom 5.0

A word on the disks:
- The Samsung 830 is about 1 generation old. On paper it should read at around 520MB/sec and write at around 320MB/sec. In this test it is hosting the OS (Win7), which will influence performance a little. Unfortunately I can’t change that without a lot of work.
- The Intel X25 G2 is an old SSD now. On paper it should read at around 250MB/sec and write at around 80MB/sec. These numbers may seem low, but the model has a reputation for displaying very low latency, which really helps performance.
- The Seagate Barracuda was a good 7200rpm drive in its day, but that day was some years ago now. On paper it should give around 75MB/sec reading and perhaps a little less in writing (I can’t find exact numbers).
- The RAMDISK is a mounted “drive” built from physical RAM. I used the freeware version of the DataRAM software to build it. In terms of speed, nothing comes close – I measure 6GB/sec read speed with HDTach. Write speed is probably the same.

All the drives are SATA interface (except the RAMDISK, obviously)
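If you want to sanity-check drive speeds like the ones quoted above on your own hardware, a crude timed sequential read gives a ballpark figure much like HDTach's. A rough sketch in Python – the file path is hypothetical, so point it at any large file on the drive in question that hasn't been read recently, otherwise you will mostly be measuring the OS cache:

import time

TEST_FILE = r"G:\testdata\large_test_file.bin"   # hypothetical path on the drive under test
CHUNK = 4 * 1024 * 1024                          # read in 4MB chunks

total = 0
start = time.time()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        block = f.read(CHUNK)
        if not block:
            break
        total += len(block)
elapsed = time.time() - start
print(f"Read {total / 1e6:.0f} MB in {elapsed:.1f}s = {total / 1e6 / elapsed:.0f} MB/sec")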

System monitoring is done using the nifty utilities called “Process Explorer” and “Process Monitor”, formerly from Sysinternals but now from Microsoft. Timing is done by hand using the stopwatch on my phone.

Test setup

For testing, I’ve used the same catalogue and images as for the other tests in this series. So all tests are performed on an empty catalogue specifically made for this testing, and a directory of 200 images (100 from a Canon 5D and 100 from a Fuji X10). Some additional testing was done with a second catalogue containing 1 image only – a 454MB, 13147 x 4532 pixel panorama chosen to really slow things down.

LR has 3 particularly active directories: the image directory (where the images are kept), the catalogue directory (which houses the LR catalogue and Previews) and the Camera Raw cache directory. There is additionally some traffic to a temp directory in the user’s home directory on C-drive (c:\users\username\AppData\Local\Temp for those interested). This last one is fixed, but you can move the others around as you wish. The only thing you can’t do is separate the catalogue from the previews.

So the testing method should simply be to locate the catalogue, cache and image directories on different drives and see how fast things go in the different configurations. As this gives an enormous amount of testing, I cut the variables down a little to “read” and “write”, with the image directory being “read” and the catalogue, previews and ACR cache being “write”. The catalogue, previews and ACR cache were then located on the same drive for most tests (all except the RAMDISK tests, in fact). As well as reducing my testing, this also makes data presentation easier, as the results can be given in a simple, easy-to-read matrix/table.

Test 1 – Library module: Importing

A simple test: see how long it takes to import the directory of 200 images and render 1:1 previews. The CPU is set to 4 cores running at 4.3GHz for this test.

http://farm3.staticflickr.com/2855/9057000072_cc00f42517_o.jpg

In case it is confusing, the left axis presents where the image directory is located (data reading) and the top axis is where the catalogue and ACR cache are located (data writing). To read it, just pick the value where column and row cross each other. So for example, the import time when the catalogue is on the X25 and the images are on the HDD would be 4m 11s. Similarly, the import time when both catalogue and images are on the SSD 830 would be 4m 13s. Hopefully you get it.

I also note that I did a couple more tests using the RAMDISK. With these tests I took the additional step of separating the ACR cache, catalogue and images to separate “drives”, which is why the results aren’t in the table above. The results for these were:

1. ACR cache on SSD 830, Catalogue on X25, images on RAMDISK – import time 4m 12s
2. ACR cache on X25, Catalogue on RAMDISK, images on SSD 830 – import time 4m 18s

The conclusion here is pretty obvious I think – there is no life-changing speedup available by moving to a superfast disk subsystem. In fact, the import times seemed remarkably independent of the speed of the storage media – putting everything on a slow HDD gives pretty much the same result as distributing the load across SSDs and RAMDISK.

I admit I am surprised: I expected to see some benefit to faster storage, but the numbers are clear.

As a sort of addendum, I can say that the CPU traces for these imports look identical, irrespective of where the files are located. This is the case where the files were located on the SSD 830 and the Catalogue and ACR cache were on the Seagate HDD:

http://farm4.staticflickr.com/3813/9057152294_ed38ffcb7e_c.jpg

And the corresponding filesystem activity. Again, the numbers are similar for all tests, just the drive letters change:

http://farm8.staticflickr.com/7352/9057152502_8a095e4334_c.jpg


Test 2 – Library module – 1:1 render of one large image

This is a repeat of the test done earlier in the Library Module testing: zooming in on a large file and seeing how fast LR can render it (i.e. when it goes “sharp” at 1:1 view), deleting the previews cache and repeating.

I have had to make a slight modification to the test though, as when looking at the file access traces I noticed that LR was not reading anything from disk during the testing – i.e. it was caching the raw data in RAM and rendering from there. In order to make the test work it is thus necessary to clear the preview cache AND restart LR between tests.
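For anyone repeating this, the cache-clearing part can be scripted outside LR. A rough sketch, assuming the usual layout where the previews live next to the catalogue in a "<catalog name> Previews.lrdata" folder; the paths and the Lightroom executable location here are assumptions, so adjust for your own install:

import os, shutil, subprocess

CATALOG_DIR   = r"G:\LR_test"                      # hypothetical test catalogue location
PREVIEWS_DIR  = os.path.join(CATALOG_DIR, "test Previews.lrdata")
LIGHTROOM_EXE = r"C:\Program Files\Adobe\Adobe Photoshop Lightroom 5\lightroom.exe"  # assumed default install path

# With Lightroom closed, delete the preview cache so nothing can be rendered
# from previously built previews.
if os.path.isdir(PREVIEWS_DIR):
    shutil.rmtree(PREVIEWS_DIR)

# Relaunch Lightroom against the test catalogue (passing the .lrcat path should
# behave like double-clicking the catalogue file), then time the 1:1 zoom by hand.
subprocess.Popen([LIGHTROOM_EXE, os.path.join(CATALOG_DIR, "test.lrcat")])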

You can see clearly in the file access traces:

Caching in RAM:

http://farm3.staticflickr.com/2853/9055238039_f9ff0c6a0a_c.jpg

Loading from disk:

http://farm8.staticflickr.com/7445/9055290809_f79d56bbd9_c.jpg

I’ve given it quite some thought and I don’t think this finding affects the results in the earlier Library module testing, as those tests were performed quite consistently and were focused on CPU performance rather than disk performance. Indeed, by caching in RAM the test perhaps even better isolates the CPU performance metric.

I’m presenting the results in the same matrix as before. Note that because of the changed methodology the numbers are not comparable with the numbers from the Library module testing in the earlier thread (you’ll note it takes longer here). CPU is set to run 4 cores at 1.6GHz.

http://farm3.staticflickr.com/2822/9057685472_1a33bc93e0_o.jpg

It became pretty obvious that there was going to be nothing to see here, so I didn’t bother filling the test matrix. In a sense we should really see the same general result as for Test 1, as we are doing pretty much the same thing – rendering a 1:1 preview. The only differences are that the image here is not a RAW (and so the ACR cache is not touched) and the image is very large (which gives all 4 cores time to spin up fully). Anyway, again we see no benefit from SSDs.

Testing in Develop Module

The big question – what is different in the Develop module that would be pertinent to the SSD vs HDD debate? Not much, I suspected, as we have seen that LR is caching the big pano image in RAM. Still, that was a TIF, not a RAW, and it is conceivable that the two are treated differently.

So I checked, and an interesting thing occurs – the Develop module appears to ignore the already generated previews, or at least generates some new data to go with them.

So this is the file system trace immediately after generating previews and zooming on 3 raw files:

http://farm6.staticflickr.com/5464/9055771365_474d7639e3_c.jpg

You can see each file is loaded from disk rather than from Previews (which is in the catalogue directory at the bottom), and some data is written to the ACR cache also.

If I then return to those 3 images and start to edit them in the Develop module (without restarting LR or purging the cache), you can see there is almost no further file access – nothing from the raw files or ACR cache, and only a little from the previews directory.

http://farm4.staticflickr.com/3806/9057994306_a7f891cdc5_c.jpg

So LR *is* caching images in RAM while you edit them, though how many images it will cache I don’t know.

Based on that I’ve chosen not to test in the Develop module, as I think the Library module testing is more than representative: the rendering of previews almost certainly uses the same code as the Develop module (as it includes any edits you have done), and in the Develop module files are cached after the initial load, so the impact of the disk subsystem is going to be negligible after the first click.

I would also argue that Exporting is most likely going to show the same results as importing, so I’m going to be a little lazy and not bother to test that either.

Conclusions

I think you’ve pretty much guessed it – within the scope of this testing at least, SSDs offer no real benefit for LR, and I must conclude that LR is limited primarily by CPU horsepower and not disk I/O. I admit I expected that as a general result, but I am surprised to see *no* measurable benefit from having a faster disk subsystem. Still, I can't argue with the numbers. I would note that the results suggest faster memory would also be beneficial, but without testing it I can’t say how great the impact would be: Intel CPUs have extremely good pre-fetching algorithms and large caches nowadays, so I wouldn’t expect too much from superfast RAM.

I do have to be honest and throw in the caveat that this testing was done with a small catalogue that doesn’t have keywords, etc. in it. So it might well be that with a really large catalogue there is some advantage to a low-latency, fast disk subsystem. At startup, for instance, quite a lot of the database may be read into memory (a quick test shows 60MB of my 600MB full lrcat being read on startup). However, in general editing there is not much traffic to and from the database file, so I am sceptical that an SSD would help much even for bigger catalogues unless they have become horrifically fragmented, and then the better solution would be to defrag them. If someone wants to test it though, I am quite happy to be proven wrong….



Edited on Jun 16, 2013 at 09:14 AM



Jun 16, 2013 at 08:24 AM
Alan321
p.1 #2 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Well, I think something is wrong here because in real life I have found that Lr on an SSD-based system works so much faster and smoother than on an HDD-based system. That isn't just for editing a single image, but for browsing and tweaking many images in quick succession. Simply looking at an image in the Develop module loads it again from the drive, either from the partly-converted data in the ACR cache or from the raw files themselves. Most drive access is a single-CPU-core activity.

Have you isolated how much benefit the operating system offers through caching? It may not all be down to Lr.

Have you considered using a bigger or additional RAM drive to simulate an 8GB or 4GB system trying to use Lr? Even if the program does not need to use virtual RAM on a drive, the OS might have to, or else reduce its ability to cache file reads and writes.

Have you tried applying sharpening and colour NR to the images before testing? These have a significant and visible impact on Lr performance. I think most of it is in the processing burden, but that seems to continue for several seconds even after an image has been displayed, and there may also be some residual processing load from the previous image(s) if you go through them fast enough.

How can there be only a very few seconds' difference between using an HDD and an SSD when there are multiple GB of data being read, and when the SSD is reading and writing several times faster than the HDD – or even much faster than that for random data access?
Another related factor is whether or not the HDD is full, because HDD data transfers slow down a lot as they fill up, whereas an SSD does not.

Your testing offers food for thought but the results don't ring true. I cannot, however, say what specifically is being done wrong to account for it.

cheers,
- Alan

PS Please use "B" for Bytes and "b" for bits so that drive transfer rates in B/s won't be confused with comms or interface transfer rates in b/s. It's a pity there is no well defined standard for this.



Jun 16, 2013 at 09:09 AM
15Bit
p.1 #3 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Alan321 wrote:
PS Please use "B" for Bytes and "b" for bits so that drive transfer rates in B/s won't be confused with comms or interface transfer rates in b/s. It's a pity there is no well defined standard for this.

Thanks - I seem to have got into the bad habit of incorrect usage of "B" and "b". Should be fixed now.

Well, I think something is wrong here because in real life I have found that Lr on an SSD-based system works so much faster and smoother than on an HDD-based system.

Well, I've certainly found that an SSD-based system is generally faster to use in all respects, simply because the OS clips along a lot more smartly. That general snappiness will translate to LR too. The test here was really evaluating whether it is worth hosting your images and LR catalogue on an SSD over a spinning HDD. Unfortunately, without reinstalling Win7 on another disk I can't determine the impact of having the OS on a spinning HDD.

Have you isolated how much benefit the operating system offers through caching? It may not all be down to Lr.

Specifically, no, I haven't. That would really require rebooting for every test, which is quite a time overhead. I instead made the assumption that 2.8GB of images was enough that the OS (and LR) wouldn't be doing much caching. I note that the different tests do read and write from different disks, and the numbers are reproducible if I go back and check different configurations of files on drives. Also, the process monitor tool is quite explicit about what is being read from and written to disk, and I don't think it is easy to fool.

Have you considered using a bigger or additional RAM drive to simulate an 8GB or 4GB system trying to use Lr?

In truth I'm a little wary of doing too much with the RAMDISK – whilst it is very fast, I am also running the OS from RAM, my onboard graphics uses system RAM, and all the LR processing is going on in RAM too, so the real bandwidth and latency of the RAMDISK is perhaps questionable when the system is under heavy load.

Have you tried applying sharpening and colour NR to the images before testing?

The large panorama image used was actually edited, and had a bit of pretty much every slider applied. And yes, with edits included it is much slower. That's why I included them in the original Library module testing. As you say though, their effect is entirely on the CPU load, not the disk. This makes sense when you think about it – LR renders everything on the fly, using the values of each slider setting. Those values are simply numbers, stored either as text in an .xmp file or in the LR catalogue. In neither case can the settings be classed as a large chunk of data to be read from disk.

The main directory of 200 files did not include edits though, as I wanted to keep the CPU overhead to a minimum so as to emphasise the disk subsystem performance.

..and there may also be some residual processing load from the previous image(s) if you go through them fast enough.

Something I noticed, though didn't look into closely – if you unzoom from an image that is rendering in the Library module, or move to the next image before it has rendered, then the rendering appears to get cancelled. I suspect this behaviour is there to minimise exactly that phenomenon.

How can there be only a very few seconds difference between using an HDD and an SSD when there are multiple GB of data being read and when the SSD is reading and writing several times faster than the HDD...

That is the big question, isn't it? In all honesty I don't really know. The best explanation I have is that the CPU-heavy rendering part of image importing and editing takes so much longer than the data access that the speed of the disk subsystem is effectively masked. And of course for the Develop module images are cached in RAM, with very little disk access going on.

Another related factor is whether or not the HDD is full, because HDD data transfers slow down a lot as they fill up, whereas an SSD does not.

True, but my drives at least are not full.

Your testing offers food for thought but the results don't ring true. I cannot, however, say what specifically is being done wrong to account for it.

I quite understand – I also wonder at the results. Sometimes though, common knowledge/collective wisdom, or whatever you want to call it, is wrong. That's why I do this testing – to check whether I can properly justify what I think I am seeing. In this case, I can't find any evidence that LR benefits from SSDs, within the constraint that I boot my OS from one.

I know that the results are going to be somewhat controversial, and I expect many people won't believe them. Everyone is free to go away and prove me wrong though, and if someone can do that I am quite happy to accept the results and try to find where my testing is lacking. Frankly, such research can only benefit us all in terms of understanding, and I completely support people double-checking my findings.



Jun 16, 2013 at 09:49 AM
tived
p.1 #4 · Lightroom 5 Performance Testing: Pt.2 - SSD's


15Bit,

What are you using to make your tests with? How are you testing, and what verifies what you are testing? And if you repeat the tests, are the results consistent?

Thanks for making the effort – I will wait to comment on your results until I can confirm what you are stating.

All the best

Henrik



Jun 16, 2013 at 11:21 AM
15Bit
p.1 #5 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Henrik,

The details of the testing are in the text, so I don't quite understand some of your questions.

With respect to repetition, yes, the results are reproducible. Indeed, almost every test was repeated 2 or 3 times and the numbers were very consistent. For the 4-minute imports, for example, I would estimate an error of +/- 5 secs.




Jun 16, 2013 at 11:32 AM
Alan321
p.1 #6 · Lightroom 5 Performance Testing: Pt.2 - SSD's


When I mentioned using a bigger RAM disk it was with the intention of starving the system of RAM for the non-RAM-disk tests, i.e. so that it would simulate a system which had far less than 16GB of RAM installed and which would presumably run out of cache sooner.

You mentioned that the Develop module changes are cached in RAM and I know that it doesn't necessarily apply to Library module tests, but I don't think that the caching applies to bigger libraries. If it did then Lr would eventually use all available RAM and yet I observe that it does not, implying that perhaps the OS is doing the caching. Perhaps Lr keeps second and subsequent edits in RAM for a short while but the first time it uses a file it will read from the ACR cache and finish converting from there, or else it will read from the actual raw files and convert from scratch. After that, every time I do something in the way of a tweak Lr will spend a couple of seconds being busy even after the screen seems to show the results; I assumed that it was re-writing to the cache files too. Unfortunately the Mac will not show me when the drives are being accessed.

Another thing is that when Lr needs to display an image preview it has two basic options: one is to re-process an existing preview if it is roughly the right size already, and the other is to generate a new preview from scratch. Presumably the latter also involves some use of the full Develop module re-processing of the image to generate an appropriate preview based on the latest file data, which also updates the ACR cache and Lr Preview files in the process. I don't know how Lr decides which way to jump.

I know that testing such as you have done is a PITA, but I wonder if you would consider using the same 200-image set for another test, this time with far more images in the catalog. e.g. you might copy your own catalog and previews and files to a new location and then use that catalog for the tests. Perhaps there is more file-related database work involved when the library is much larger.

If your test involved some processing on each image as it was imported (e.g. a preset for sharpening or NR, or a 1-degree rotation), it might reveal that Lr does a lot of disk activity between tweaks. A variation on this theme is to do a bundle of things with every file after they have been imported, restart Lr, and then see how fast it can do one more thing to every file. Perhaps having a bundle of things to do will be more revealing than simply importing the files, and this can still all be done in the Library module for one test and in the Develop module for a separate test.

Maybe with such tests you will better measure the real impact of using speedy storage. I hope so, because I'm sure it is not just imagination that Lr flies on an all-SSD system.


I still wonder whether or not Lr is clever enough to convert several exposure tweaks into a single equivalent exposure tweak for faster processing, or else simply do them all in sequence. That is something else that can be revealed by showing how much slower adding one more tweak may or may not be. e.g. import the files and close Lr. Re-open and select the files in Library module and change the exposure by 0.1 stop. Time results. Now increment several times by 0.1 stop at a time without timing it. Close and re-open Lr and select the files again and time how long it takes for one more exposure change. If it takes significantly longer then you'd know it was applying more than one change.

Then of course you could go further and do a similar test with different types of tweaks, say exposure, sharpness, saturation, contrast, etc.

There now. Off you go. Shouldn't take you too long.
Actually, it is quite an onerous workload.

And now for one more spanner in the works - When I tell Lr to rebuild its previews it provides a progress bar. I'm pretty sure that long after that bar stops Lr is still updating the thumbnails on the screen. That sort of thing makes it much harder to accurately judge when Lr has really finished doing what it has to do.

- Alan



Jun 16, 2013 at 12:35 PM
15Bit
p.1 #7 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Alan,

The RAMDISK idea is a clever one. I hadn't thought about doing that. I'm unsure of its relevance though, as we all have 8GB or more of RAM nowadays. Also, it might be tricky to get the correct level of RAM reduction that impacts file system caching without also crippling the system in other ways.

Caching into RAM in the Develop module seems to be very dynamic. I would say from memory I have never seen LR take more than 2GB of RAM in normal use. I did just have a play in my panorama folder, clicking between 400-800MB TIFFs, and I managed to briefly get usage up to 3.2GB. But I mean briefly – LR is very active about uncaching files from RAM. So in terms of the user experience, if you are clicking backward and forward a lot between images in the Develop module then a lot of rebuilding is probably going on, with a fair bit of file system access.

In terms of preview rebuilds, I think what happens differs between modules. In Develop it needs to continually rebuild the image as you edit, and when you have finished, an up-to-date preview is presumably left in the preview cache. In Library it just pulls up whatever is in the preview cache, and if nothing is there it generates one following whatever develop settings are present.

I don't see how there would be any more file access going on with a large database than a small one – we have established that in Develop the caching is aggressively dynamic, and in Library the preview cache size is dictated by the user. What might be different is slower access of the actual database itself, which would be interesting. However, none of the actions above resulted in much data being written to the database, or read from it, so I think the lrcat file would need to be enormous to get any decent testing done. I only have 600MB at present...

Applying an import preset is an interesting idea, but I think it might actually be counterproductive, as it will give the CPU more work to do and thus further mask the disk subsystem performance. What I actually need is files which process extremely fast. Hmm, that's an idea – make up a directory of large all-black image files. That would stress the disk but not the CPU. Not exactly representative of real use though....
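For what it's worth, that all-black idea would only take a few lines to set up. A rough sketch using Pillow – the library choice and output folder are just assumptions, and anything that writes large uncompressed TIFFs would do:

import os
from PIL import Image   # Pillow - an assumption; any tool that writes big uncompressed TIFFs works

OUT_DIR = r"F:\black_test"          # hypothetical target folder on the drive under test
os.makedirs(OUT_DIR, exist_ok=True)

# 6000x4000 RGB at 8 bits/channel is roughly 72MB per file uncompressed - big
# enough to stress the disk, but trivial for the renderer to display.
black = Image.new("RGB", (6000, 4000), (0, 0, 0))
for i in range(50):
    black.save(os.path.join(OUT_DIR, f"black_{i:03d}.tif"))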

If you want to know about the serial tweaks, try reading a .xmp sidecar file. LR simply applies whatever is there, so the way it is laid out will tell you what LR is doing.
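To make that concrete, here is a rough sketch of peeking at a sidecar. The filename is hypothetical and the exact attribute names vary between LR versions, but the point is that the develop settings are just a few KB of text attributes in the crs: (Camera Raw settings) namespace:

import os, re

SIDECAR = r"F:\images\IMG_0001.xmp"   # hypothetical sidecar next to a raw file

print("sidecar size:", os.path.getsize(SIDECAR), "bytes")   # typically only a few KB

with open(SIDECAR, encoding="utf-8") as f:
    text = f.read()

# Develop settings are plain attributes such as crs:Exposure2012="+0.50" -
# just print whatever sliders are recorded in the file.
for m in re.finditer(r'crs:(\w+)="([^"]*)"', text):
    print(m.group(1), "=", m.group(2))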

Yes, I also wonder about the progress bar.

Something to consider in our thinking on this – the RAW files I am testing are around 11-12MB in size, and preview rendering takes around 1.2 secs per image (240 secs import time divided by 200). My slowest disk reads at 60MB/sec, meaning that it can load each file in less than 0.2 sec. This means that the disk read really is only a fraction of the overall rendering process. Also, if you credit the software programmers with having written better than a plain dumb loading algorithm, the files are quite likely pre-fetched into memory whilst the previous image is being processed. In this case the disk subsystem would need to be slower than the image render time for the import times to be affected.
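The same arithmetic written out, with the numbers taken straight from the paragraph above:

# Back-of-envelope version of the numbers above, straight from the post.
import_time_s = 240      # ~4 minute import of the test folder
image_count   = 200
raw_size_mb   = 12       # per file, roughly
hdd_read_mbs  = 60       # the slowest disk in the test

render_per_image = import_time_s / image_count     # ~1.2 s of rendering per image
read_per_image   = raw_size_mb / hdd_read_mbs      # ~0.2 s of disk reading per image

print(f"render ~{render_per_image:.1f}s vs read ~{read_per_image:.2f}s per image")
# Even on the slow HDD the read is roughly 6x quicker than the render, and if the
# next file is pre-fetched while the current one renders, the disk never limits
# the import at all.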

Another test to throw into the mix: a completely fresh catalogue and a 5GB folder of copies of the biggest files I have (the stitched panos). For the CPU and I/O plots, everything is housed on the spinning HDD:

http://farm4.staticflickr.com/3771/9058712147_00c9c917ec_b.jpg

Now, you see the I/O spikes? They peak at over 400MB/sec, which is plainly impossible for my spinning hard disk, so something interesting is going on. The process monitor is clear that the "something" does not involve writing data to my C-Drive (it logs all file processes, even to the pagefile), but it may involve the system caching in RAM, which I don't know how to evaluate/monitor. However, for our purposes here the exact mechanics are irrelevant: what this shows is that disk I/O is sufficiently pre-fetched/cached/optimised that LR is never waiting for the files to be read, and hence feels very little impact from SSDs.



Jun 16, 2013 at 01:12 PM
OntheRez
p.1 #8 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Fascinating. The finding of no significant difference based upon drive type is surprising. I'm just thinking out loud here, but a couple of things come to mind. First, if I understand your setup correctly, LR is housed on an SSD. If so, then it is already getting most of the bump the drive would provide overall in any disk-based process. The second thought focuses on your conjecture that the time/power necessary to render the image is several orders of magnitude greater than the time required to fetch data, thus "hiding" any benefit or effect of drive type.

From what I know about the maths involved in rendering (which frankly would easily fit in a thimble with room left over), it suggests that the processors are applying successive calculations to the data, in which a subsequent calculation is dependent upon the results of the first. If this is so, then the need for processor speed becomes paramount, which is what showed up in your first tests. Again, my math never really went that far, so someone who actually knows what they're talking about could step in here. It also would seem to support my earlier bashing of Adobe's programmers for not making truly effective use of multiple cores, so the data can be split across the cores, allowing more of the calculations to happen simultaneously.

Another possible answer may be that we simply now have available to us enough RAM (I have 32GB) that very few processes ever need to refer to or refresh from storage except to write the end result. I haven't really thought this through, but I'm wondering, given that "the price of RAM is approaching the price of sand" as was said in the old days, whether we have unbalanced systems where either the processor technology or, more likely, our programming skill can't effectively utilize all the RAM available. I do know from experience that the ability to program for a processor lags well behind the ability of chip makers to ramp them up.

All in all very interesting, and at least for me it causes me to re-evaluate whether I'm going to move to an SSD-based main drive.

Again, thanks for the excellent work.

Robert



Jun 16, 2013 at 06:47 PM
15Bit
p.1 #9 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Robert,

The LR executable is indeed installed on an SSD (C-Drive), meaning that it should see a speed bump when it initially loads. The catalogue and image files were moved around different drives during testing.

I actually think Adobe have done a decent job of programming – you can clearly see that several cores can be applied to the processing of a single image, not just one core per image. That takes work to program. The real cause of slowness is simply that there is very sophisticated image rendering going on in real time with packages like LR (I would point out that Capture One 7 does not perform all that differently). I think that if we want to extract the very best image quality from our RAWs using software like LR, which renders on the fly rather than producing an extra output file (like PS does), we have to accept some performance issues.

You are correct about the RAM – I paid extra to go from 16 to 32MB in my first Pentium PC; this time around I decided it wasn't worth the extra to go from 16 to 32GB...

My recommendation would be to go with an SSD main boot drive. The OS really does run faster on them. However, once you have that, there seems to be no need for you to host your LR catalogue and images on it - they can stay on traditional spinning media and run just as fast.



Jun 17, 2013 at 12:29 AM
tived
p.1 #10 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Thanks 15Bit,

Got to stop reading this forum when tired! Sorry, missed that bit.

15Bit wrote:
Henrik,

The details of the testing are in the text, so I don't quite understand some of your questions.

With respect to repetition, yes, the results are reproducible. Indeed, almost every test was repeated 2 or 3 times and the numbers were very consistent. For the 4-minute imports, for example, I would estimate an error of +/- 5 secs.





Jun 17, 2013 at 05:15 AM
Ho1972
p.1 #11 · Lightroom 5 Performance Testing: Pt.2 - SSD's


15Bit wrote:
Well, I've certainly found that an SSD-based system is generally faster to use in all respects, simply because the OS clips along a lot more smartly. That general snappiness will translate to LR too. The test here was really evaluating whether it is worth hosting your images and LR catalogue on an SSD over a spinning HDD. Unfortunately, without reinstalling Win7 on another disk I can't determine the impact of having the OS on a spinning HDD.


I doubt LR performance is much affected by the type of disk it's installed on. I tested Photoshop installed on an SSD and a Velociraptor and my benchmark / real-world results were not significantly different. Sure, everything loads faster with the SSD, at least on first boot and first app load, but thereafter even that advantage tends to be minimized by the Windows caching algorithms. Maybe there’s more I/O going on with LR than Photoshop, but my guess is that it’s more processor-bound than anything else.



Jun 17, 2013 at 09:15 AM
OntheRez
p.1 #12 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Ho1972 wrote:
I doubt LR performance is much affected by the type of disk it's installed on. I tested Photoshop installed on SSD and Velociraptor and my benchmark / real world results were not significantly different. Sure, everything loads faster with the SSD, at least on first boot and first app load, but thereafter even that advantage tends to be minimized by the Windows caching algorithms. Maybe there’s more I/O going on with LR than Photoshop, but my guess is that it’s more processor bound than anything else.


So for me the takeaway on this is that it really is all about processor speed and (to a lesser extent) the number of cores. 15Bit, I concede that Adobe does utilize multiple cores, but I retain the suspicion that they really haven't mastered job division, pre-fetch, task passing between cores, etc. Just a suspicion, as I haven't a shard of data and no way to test for it.

Higher processor speed in the Mac world is hard. (I run a dual quad 2.4GHz 2010 MacPro.) So overclocking and processor swapping are difficult if not impossible to do, and to be honest this rig is completely competent. I haven't a clue whether UNIX-variant OSes do better caching algorithms. Apple's gear has its pluses and minuses, but for sure they seriously don't want anyone mucking with core components. Only way I know to get higher speed processors is to go to a later iteration (read: whole new machine), though now those of us who use the MacPro are wondering what the new "MacPipe" represents in terms of sheer power.

In my newspaper work (I'm the sports reporter/photographer) – it's a very small town paper – probably 95% of my submissions come directly out of LR. PS gets used heavily for non-paper work. Given that I sometimes live on really tight deadlines, watching the SPOD (spinning pizza of death, for those of you not in the Mac world) whirl away while switching modules is frustrating.

I suppose it goes back to a saying from the early days of computing: "No computer is ever fast enough or small enough." I did, some time back, almost beat the first half of this, as I had access to a then-current model Cray over a holiday weekend. It was a reasonably large data set: about 2 million cases with 105 variables each. Once loaded, it was indeed close to "fast enough". Of course it utterly failed the "small enough" dictum.

As always looking for a better way. Thanks to all who have contributed to this thread. I've learned useful things from each of you.

Robert



Jun 17, 2013 at 10:00 AM
15Bit
p.1 #13 · Lightroom 5 Performance Testing: Pt.2 - SSD's


OntheRez wrote:
So for me the takeaway on this is that it really is all about processor speed and (to a lesser extent) the number of cores.

I think that's the headline message. Indeed, it seems so clear at this point that I won't rush with Pt.3 of this series, looking at the Develop module. It will now wait 3 weeks whilst I am on vacation.

I concede that Adobe does utilize multiple cores, but I retain the suspicion that they really haven't mastered job division, pre-fetch, task passing between cores, etc.

No-one has. In a previous life I also used supercomputers for work, and the CPU scaling of any sort of job that involved communication between cores (i.e. several cores working on a single dataset) was so far from linear it was hilarious. I actually give Adobe quite a lot of credit here, as I think they've done a fairly decent job. I just wonder if there are any gains to be had when importing large numbers of images, perhaps by intelligently assessing the incoming work and spinning out an optimum number of cores per image based on file size.

Higher processor speed in the Mac world is hard. ...... though now those of us who use the MacPro are wondering what the new "MacPipe" represents in terms of sheer power.

I tend to avoid Apple stuff partly for these reasons. My feeling is that the MacPipe Pro will be a spectacularly fast machine, if you can afford to buy the model with the top-end processor(s) and you have a workload that scales well across multiple cores. So the engineering, video and rendering folk will probably love it. I am not convinced that it will be a great LR machine though (at least not in performance-per-dollar terms), as clockspeed on these big core-count Xeons tends to be low, and LR really likes MHz. I strongly suspect 12 cores at ~3GHz will not shade a heavily overclocked 6-core i7 running at 4.3GHz, which can be had for a fraction of the price. Of course you will have to desert Apple and buy a PC for that...



Jun 17, 2013 at 10:46 AM
aubsxc
p.1 #14 · Lightroom 5 Performance Testing: Pt.2 - SSD's


15Bit wrote:
I am not convinced that it will be a great LR machine though (at least not in performance-per-dollar terms), as clockspeed on these big core-count Xeons tends to be low, and LR really likes MHz. I strongly suspect 12 cores at ~3GHz will not shade a heavily overclocked 6-core i7 running at 4.3GHz, which can be had for a fraction of the price. Of course you will have to desert Apple and buy a PC for that...

I am guessing more like 2.5GHz turbo speeds with the 12-core Xeons to fit into any sort of server rack solution. A 3770K modestly clocked at 4.2GHz will likely run rings around the 12-core Xeon in PS/LR work for less than a tenth of the cost of the Xeon.


OntheRez wrote:
Only way I know to get higher speed processors is to go to a later iteration

Or build your own/buy a custom build and overclock to 4.5GHz, which pretty much any modern Intel quad will run (well, perhaps not Haswell).



Jun 17, 2013 at 11:26 AM
Alan321
p.1 #15 · Lightroom 5 Performance Testing: Pt.2 - SSD's


I wonder whether the apparent discrepancy between these test results and my perception of SSD benefit is related to the sequence in which files are being accessed.

For example, if I am using a filter or sort order to display a selection of non-contiguous images (i.e. not stored next to each other) in a sequence that doesn't match the way they are stored, then an HDD could have to do a lot of head movements that it may not be doing in your tests. The OS and drive disk caches may be rendered useless in that case, but the SSD will still be very speedy.

Add to that a situation in which I have standard-size previews in the Lr preview cache and I'm flitting through images at full size in the Library module; then Lr will have to fully or partially reload each image to create and store a full-size preview – and do so in my sequence rather than the stored sequence. An SSD with the images and catalog and previews should be much quicker than an HDD, as is the case in my experience.

This sort of usage of Lr is common for me.

- Alan



Aug 26, 2013 at 11:26 PM
15Bit
p.1 #16 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Alan321 wrote:
I wonder whether the apparent discrepancy between these test results and my perception of SSD benefit is related to the sequence in which files are being accessed.

I think that would only really be the case if you were saving your files to a hard drive that is quite full up - raw image files tend to be saved quite contiguously because once they are saved on disk they are not written to again, only read from. So there is little chance for them to fragment. That you are flitting from image to image fairly randomly is unlikely to have a big performance impact if the images themselves are contiguous.

What is the hardware config on your PC? CPU, RAM, HDD, etc.?



Aug 27, 2013 at 02:40 PM
Alan321
p.1 #17 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Alan321 wrote:
I wonder whether the apparent discrepancy between these test results and my perception of SSD benefit is related to the sequence in which files are being accessed.

15Bit wrote:
I think that would only really be the case if you were saving your files to a hard drive that is quite full up - raw image files tend to be saved quite contiguously because once they are saved on disk they are not written to again, only read from. So there is little chance for them to fragment. That you are flitting from image to image fairly randomly is unlikely to have a big performance impact if the images themselves are contiguous.

What is the hardware config on your PC? CPU, RAM, HDD, etc.?


As you say, the raw files would remain as contiguous as ever, but images are collected over a long time and they will not all be contiguous unless you periodically copy the library to a new drive. Even if they were, our need to access them would be non-contiguous because we are looking at them in a sequence that is out of their stored order. Bigger libraries would be more likely to suffer from this issue, but it would certainly not need the hard drive to be full or even close to full – just so long as the library is big enough to occupy multiple HDD data tracks and sufficient of the images are not in the ACR cache. It's also trivially easy for the library to be far bigger than the OS disc cache.


My computer is an early 2011 MacBook Pro 17" with a 2.3GHz quad-core i7, 16GB of 1333MHz RAM, and two 480GB SSDs as a RAID 0 striped pair. External storage for backups is via a Thunderbolt interface.

My Lr catalog size is about 900MB, with about 40k images occupying about 410GB of storage. I have just increased my preview cache to about 180GB, with 1:1 previews of many images. With everything on a speedy SSD there is little need for me to economise on 1:1 previews. My ACR cache was 20GB but I've just increased the limit to 50GB. I'll have to wait and see how much that helps. Even on a big HDD with no other data, that big an image collection, preview cache and ACR cache would spread over a significant number of tracks, and access would have to be much slower than on an SSD.

I could not get 960GB of SSD storage inside a new Retina MBP, and so I am reluctant to "upgrade" until I have at least figured out how to utilise the smart previews feature and manage a main catalog on external storage plus a smaller catalog on the laptop. I don't want the laptop tied to a power point just to use external storage. For now everything fits inboard and I like that convenience. If I ever need a bit more space for adding new images on a busy holiday then I can shrink one or both caches.

- Alan



Aug 28, 2013 at 08:02 AM
15Bit
p.1 #18 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Presumably your "slow" HDD comparison was the same setup but with an HDD in place of the SSDs?


Aug 29, 2013 at 12:24 AM
Alan321
p.1 #19 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Not quite. I was comparing using a single SSD with using a single HDD, or with using an HDD and an SSD.

The most important factor was my very clear perception of much faster browsing through images in both the Library and Develop modules when I used an SSD. I never quantified it with tests such as yours, but then the way I use Lr is not the way your tests use it. The difference was clear enough that going back to an HDD always seemed painfully slow. I had to do that on occasion because an SSD failed or was not big enough.

- Alan



Aug 29, 2013 at 04:12 AM
15Bit
p.1 #20 · Lightroom 5 Performance Testing: Pt.2 - SSD's


Perhaps the question would be better phrased as "How was the spinning HDD connected to the MacBook Pro?". I'm just checking you are comparing apples to apples, so to speak.

You have a good point with the perception vs quantification. The problem is that perception is very hard to quantify. Still, I don't notice any perceptible differences between HDD and SSD on my system. In part, this is what prompted me to do the testing.



Aug 29, 2013 at 05:36 AM