MacBook Pro with M3 chip coming soon?

  
 
Rajan Parrikar
p.1 #1 · MacBook Pro with M3 chip coming soon?


Apple's upcoming Oct 30 event seems to suggest so.

https://twitter.com/markgurman/status/1716849098333077844



Oct 25, 2023 at 06:41 AM
jhapeman
p.1 #2 · MacBook Pro with M3 chip coming soon?


It would appear to be the case. My wallet is waiting.


Oct 28, 2023 at 09:42 AM
Rajan Parrikar
p.1 #3 · MacBook Pro with M3 chip coming soon?


jhapeman wrote:
It would appear to be the case. My wallet is waiting.


Same.



Oct 28, 2023 at 01:24 PM
spiffy23
p.1 #4 · MacBook Pro with M3 chip coming soon?


It would appear so, but keep in mind the memory bus on the M3 Pro will be narrower than on the M1 and M2 Pro. What impact will that have on performance? We'll just have to wait and see. I can tell you that the 16" MBP M2 Max is an absolute beast, and not in need of replacement any time soon.


Nov 02, 2023 at 01:37 PM
Rajan Parrikar
p.1 #5 · MacBook Pro with M3 chip coming soon?


spiffy23 wrote:
It would appear so, but keep in mind the memory bus on the M3 Pro will be narrower than on the M1 and M2 Pro. What impact will that have on performance? We'll just have to wait and see. I can tell you that the 16" MBP M2 Max is an absolute beast, and not in need of replacement any time soon.


Yes, true. The upgrade makes sense to someone like me who is still on an Intel-based MacBook Pro.



Nov 02, 2023 at 03:17 PM
voltaire
p.1 #6 · MacBook Pro with M3 chip coming soon?


We’re in the same boat, Rajan. Have you decided yet?

Rajan Parrikar wrote:
Yes, true. The upgrade makes sense to someone like me who is still on an Intel-based MacBook Pro.




Nov 03, 2023 at 01:12 PM
jhapeman
p.1 #7 · MacBook Pro with M3 chip coming soon?


spiffy23 wrote:
It would appear so, but keep in mind the memory bus on the M3 Pro will be narrower than on the M1 and M2 Pro. What impact will that have on performance? We'll just have to wait and see. I can tell you that the 16" MBP M2 Max is an absolute beast, and not in need of replacement any time soon.


For all intents and purposes, I expect zero noticeable impact to the end user in 99% of use cases. Keep in mind it's massively higher than what is found in the Intel/AMD world. Then you've got the fact that even with intensive stress-testing, the nerds at AnandTech couldn't come close to saturating the memory bus.

I actually ordered up an M3 Max so we'll get some direct comparison numbers in a week or so that I will once again post here.




Nov 03, 2023 at 01:43 PM
Rajan Parrikar
p.1 #8 · MacBook Pro with M3 chip coming soon?


voltaire wrote:
We’re in the same boat, Rajan. Have you decided yet?



Yes, more or less, but it will be mid to late December when I place my order. I'm likely to settle on an M3 Max with 96GB of RAM and a 4TB SSD. This machine will be more powerful than my current desktop, a 2017 iMac Pro, which I expect will be replaced at some point by a Mac Studio (M4?).





Nov 03, 2023 at 03:19 PM
CanadaMark
p.1 #9 · MacBook Pro with M3 chip coming soon?


jhapeman wrote:
For all intents and purposes, I expect zero noticeable impact to the end user in 99% of use cases. Keep in mind it's massively higher than what is found in the Intel/AMD world. Then you've got the fact that even with intensive stress-testing, the nerds at AnandTech couldn't come close to saturating the memory bus.

I actually ordered up an M3 Max so we'll get some direct comparison numbers in a week or so that I will once again post here.



The memory bandwidth on Apple silicon is there for GPU performance, not CPU performance (doubling it does almost nothing for CPU performance, for example). Memory bandwidth is however extremely important for GPU performance, which is why Apple has taken the unified memory approach and also why Nvidia is using GDDR6X with over 1.0 TB/s memory bandwidth in their best cards. The memory bandwidth on the M3 CPU is nowhere near what is available with Nvidia GPUs, which is what you would pair with an Intel/AMD CPU if you were building a comparable Windows machine.

With the M3 Pro dropping down to 150GB/s, that is roughly one-seventh of what high-end Nvidia cards are offering, and about half that of entry-level Nvidia GPUs like the 4060.

I'm not saying it's bad, just adding clarity on how it needs to be compared if you are looking at Windows machines.
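To put rough numbers on that, here is a sketch using published spec-sheet figures (approximate: the RTX 4090's GDDR6X is around 1008 GB/s and the RTX 4060's GDDR6 around 272 GB/s):

```python
# Rough bandwidth ratios from published (approximate) spec-sheet numbers.
m3_pro   = 150    # GB/s, Apple M3 Pro unified memory
rtx_4090 = 1008   # GB/s, RTX 4090 GDDR6X (high-end)
rtx_4060 = 272    # GB/s, RTX 4060 GDDR6 (entry-level)

print(f"4090 vs. M3 Pro: {rtx_4090 / m3_pro:.1f}x")  # ~6.7x
print(f"M3 Pro vs. 4060: {m3_pro / rtx_4060:.0%}")   # ~55%, i.e. about half
```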



Nov 03, 2023 at 03:40 PM
RustyBug
p.1 #10 · MacBook Pro with M3 chip coming soon?


CanadaMark wrote:
The memory bandwidth on Apple silicon is there for GPU performance, not CPU performance (doubling it does almost nothing for CPU performance, for example). Memory bandwidth is however extremely important for GPU performance, which is why Apple has taken the unified memory approach and also why Nvidia is using GDDR6X with over 1.0 TB/s memory bandwidth in their best cards. The memory bandwidth on the M3 CPU is nowhere near what is available with Nvidia GPUs, which is what you would pair with an Intel/AMD CPU if you were building a comparable Windows machine.

With the M3 Pro dropping down to
...

For comparison, the M2 chips in a MBP run 200GB/s in the Pro configuration and 400GB/s in the Max ... and have a very low heat signature. With the higher-performance GPUs of a high-end Nvidia, there is also an accompanying greater TDP / heat signature requirement.

That's not to say GPU performance can't be higher in a Windows Nvidia rig ... but it does require more power / more heat to do so. In a laptop, and particularly if used on the lap, the Nvidias can get a bit roasty / toasty. Best I can tell, the MBP modular approach builds out additional memory bandwidth while retaining a very good performance / heat balance.

I have an M2 Max with 400GB/s ... can't speak to the Ultra configuration (or the M3). The Nvidia in my ThinkPad Extreme ... it just gets too hot to be comfortable to use under load (pano stitching / uprezzing).

While the Max does offer double the GPU cores of the Pro ... it was the increase in memory bandwidth from 200GB/s to 400GB/s that I was most interested in. I opted for the 64GB (Max was a requirement), but even if I had only gotten 32GB ... which could be supported in a Pro ... I still would have gotten the Max, for the 2X memory bandwidth over the Pro.

Which, btw ... I did in fact purchase a 32GB Pro and a 32GB Max, as well as a 64GB Max (one at a time, demo / return) in the M1 variants. For my uses, I didn't notice any real diff between the 64GB Max and the 32GB Max. But I did notice some diff between the 32GB Max (400GB/s) and the 32GB Pro (200GB/s) for some heavier operations / tasks (pano stitching / uprezzing, etc.). For general use, nothing notable. Can't speak to video, I'm stills only.

My takeaway was that the 2X memory bandwidth can facilitate better responsiveness, even if the 2X memory amount isn't a requirement for a given task. And, given that the memory is shared between CPU and GPU operations ... all the more reason for getting the faster bandwidth (imo).

Which, btw ... when I got the M2 MBA, the memory was lower at 24GB, but it also came with a bandwidth of 100GB/s. The difference between that and the M1 Pro/Max machines was VERY noticeable. I really don't think it was the reduction from 32GB to 24GB; rather, despite the % CPU increase in clock speed ... SLICING the memory bandwidth in half again (400 / 200 / 100) to 100GB/s in the base units gives a VERY different responsiveness vs. 200 or 400. Imo, a marginal % in clock speed pales in comparison to the 2X / 4X factor of memory bandwidth when asking for intensive operations. We don't always need the full capacity of, say, 64GB, but it strikes me that we can always benefit from the more responsive bandwidth (task dependent wrt threshold of detection).

Imo, the bigger gains for sustained operations come from the bandwidth capacity more than the clock speed increases. A bit early yet to see how the M3 plays out in this, but no matter what AMOUNT of memory one opts for, I'd pay attention to the bandwidth they can process that memory at, too.
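For what it's worth, those 100 / 200 / 400 tiers fall straight out of the memory bus width. A back-of-the-envelope sketch, assuming the M2 generation's LPDDR5-6400 with 128/256/512-bit buses for Base/Pro/Max:

```python
# Peak bandwidth = transfer rate (MT/s) x bus width (bits) / 8 bits-per-byte.
# Assumes LPDDR5-6400 and the commonly cited bus width per tier.
MTS = 6400  # mega-transfers per second

for tier, bus_bits in [("Base", 128), ("Pro", 256), ("Max", 512)]:
    gb_s = MTS * 1e6 * bus_bits / 8 / 1e9
    print(f"M2 {tier}: {gb_s:.1f} GB/s")
# -> 102.4 / 204.8 / 409.6 GB/s: the marketing-rounded 100 / 200 / 400
```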




Nov 03, 2023 at 10:17 PM
 



jhapeman
p.1 #11 · MacBook Pro with M3 chip coming soon?


CanadaMark wrote:
The memory bandwidth on Apple silicon is there for GPU performance, not CPU performance (doubling it does almost nothing for CPU performance, for example). Memory bandwidth is however extremely important for GPU performance, which is why Apple has taken the unified memory approach and also why Nvidia is using GDDR6X with over 1.0 TB/s memory bandwidth in their best cards. The memory bandwidth on the M3 CPU is nowhere near what is available with Nvidia GPUs, which is what you would pair with an Intel/AMD CPU if you were building a comparable Windows machine.

With the M3 Pro dropping down to
...

You can't compare across the platforms, though; to start with, it's not *just* for GPU performance that Apple is using LPDDR5. On the PC side, it doesn't matter what Nvidia uses when there are bottlenecks between the CPU and main memory (the theoretical max here is about 94GB/s, but in practice it ends up much lower) and the CPU and GPU have to communicate over the PCIe 4.0 x16 bus at only 32GB/s. The reality is that it's a complex calculus of the various bus speeds, the tasks at hand, and the software itself and how well it's written/optimized.

One of the reasons Apple Silicon performs so well is that many of these variables are flattened out with a unified memory architecture and an integrated stack from CPU manufacture to the OS, all under one central control.

Apple isn't a dumb company. I have no doubt that if they changed this, it's not going to create a real-world performance difference that most will notice. As I mentioned in another thread, vigorous testing by the nerds over at AnandTech on the M1 generation showed that they just couldn't push the system hard enough to saturate the available memory bandwidth. As Apple continues to learn from how the M-series is used in the real world, we will continue to see the SoC evolve and develop to reflect that.

We will know a lot more when these become available on Tuesday for testing and the review embargoes are lifted. On the surface it does appear the M3 Pro is at best a minor move up over the previous M2 Pro, but that leaves out any gains on the GPU side from the addition of ray tracing and the new GPU architecture with dynamic caching.

I know Art at Art is Right will do his tests, and I have an M3 Max that is supposed to arrive on Tuesday, which I will run through my normal battery of tests and post here. I am in a small minority of people who are pretty excited about the hardware ray tracing, as we do a lot of 3D rendering in my business. Right now I'm forced to use PCs to get that done, but most of the key vendors are now offering macOS support and several have already announced support for Apple's ray tracing on the M3. Exciting times ahead in that area!
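Once the machines land, anyone curious can get a crude read on effective memory bandwidth themselves. A single-threaded NumPy sketch (a toy compared to AnandTech's tuned multi-core harness, so treat the result as a floor, not a peak):

```python
# Crude STREAM-triad-style probe of effective memory bandwidth.
import time
import numpy as np

N = 100_000_000              # ~800 MB per float64 array
a = np.ones(N)
b = np.ones(N)
c = np.empty(N)

start = time.perf_counter()
np.multiply(b, 3.0, out=c)   # c = 3.0 * b   (read b, write c)
c += a                       # c = c + a     (read c and a, write c)
elapsed = time.perf_counter() - start

bytes_moved = 5 * 8 * N      # 16N + 24N bytes across the two passes
print(f"~{bytes_moved / elapsed / 1e9:.0f} GB/s effective")
```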




Nov 04, 2023 at 09:33 AM
CanadaMark
p.1 #12 · MacBook Pro with M3 chip coming soon?


jhapeman wrote:
You can't compare across the platforms, though; to start with, it's not *just* for GPU performance that Apple is using LPDDR5. On the PC side, it doesn't matter what Nvidia uses when there are bottlenecks between the CPU and main memory (the theoretical max here is about 94GB/s, but in practice it ends up much lower) and the CPU and GPU have to communicate over the PCIe 4.0 x16 bus at only 32GB/s. The reality is that it's a complex calculus of the various bus speeds, the tasks at hand, and the software itself and how well it's written/optimized.



Well, then why did you directly compare them in your previous post? (Your comment: "Keep in mind it's massively higher than what is found in the Intel/AMD world"), which is why I replied to add clarity on how they need to be compared if you are going to do that. The reason Apple's memory bandwidth is reasonably high is because the GPU is on the same silicon as the CPU - they have to do that because GPU memory bandwidth is a huge factor in GPU performance. If you are handling the graphics processing on a separate card altogether, you do not need anywhere near that kind of bandwidth on the CPU side. The memory bandwidth doesn't have much of an effect on CPU performance (look at single core CPU benchmarks if you want to check - doubling the memory bandwidth does almost nothing for the CPU performance) - as I mentioned before, in Apple's case, the high memory bandwidth is there specifically for GPU performance.

GPU bandwidth very much does matter and can be fully utilized on a Windows machine - why else would Nvidia have such drastic differences between GPU memory bandwidth as you progress through their lineup? It's not there for fun. There is a reason something like the RTX 4090 is so much more powerful (and expensive, and power hungry) than anything else on the market.

With respect, it doesn't sound like you understand how GPU bandwidth is utilized on a Windows machine. GPU memory bandwidth has nothing to do with the PCIe link. Memory communications between the GPU core and the VRAM happen directly on the card; none of that goes over the PCIe bus, which is only for communication between the GPU core and the CPU and is not even close to being a bottleneck for any process. You plug your monitor directly into the GPU and not the motherboard because all the heavy lifting is being done directly on the GPU.
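The scale difference is easy to put numbers on. A sketch using approximate published figures (GDDR6X at 21 Gbps per pin on the 4090's 384-bit bus; PCIe 4.0 x16 at roughly 31.5 GB/s):

```python
# On-card VRAM bandwidth vs. the PCIe link back to the CPU (approx. specs).
pin_rate_gbps  = 21     # GDDR6X, Gbit/s per pin (RTX 4090)
bus_width_bits = 384

vram_gb_s = pin_rate_gbps * bus_width_bits / 8   # = 1008 GB/s on-card
pcie4_x16_gb_s = 31.5                            # theoretical link max

print(f"VRAM:  {vram_gb_s:.0f} GB/s")
print(f"PCIe:  {pcie4_x16_gb_s} GB/s")
print(f"Ratio: {vram_gb_s / pcie4_x16_gb_s:.0f}x")   # ~32x
```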



Nov 04, 2023 at 12:28 PM
jhapeman
p.1 #13 · MacBook Pro with M3 chip coming soon?


CanadaMark wrote:
Well, then why did you directly compare them in your previous post? (Your comment: "Keep in mind it's massively higher than what is found in the Intel/AMD world"), which is why I replied to add clarity on how they need to be compared if you are going to do that. The reason Apple's memory bandwidth is reasonably high is because the GPU is on the same silicon as the CPU - they have to do that because GPU memory bandwidth is a huge factor in GPU performance. If you are handling the graphics processing on a separate card altogether,
...

Read what I wrote: It is. No matter what the GPU is doing, it can't communicate faster than the PCIe theoretical speed, which is lower than even the lowest-end speed on the M3. The speed on the GPU itself is only for tasks that are 100% GPU-bound. So yes, I understand it, but you don't seem to understand that the GPU doesn't just exist in a vacuum. You can't just plug a monitor into a GPU without the rest of the computer there, and while individual tasks are written purely for the GPU, they don't run there with no interaction with the rest of the entire system--things like memory to CPU, CPU to GPU, and storage to CPU all come into play when calculating how fast any given task is going to complete.

You can certainly speed up the GPU in isolation by increasing the speed of the memory, but it's only one part of the equation. That's why, when you run software benchmarks on applications like Photoshop or Lightroom, there's minimal to no extra return when jumping between, say, an Nvidia 3090 and a 4090 GPU.

I have a whole group of PCs and Macs I've done comparative testing on; the only task where a 4090 outperforms older-series Nvidia GPUs in Lightroom is AI Denoise. For everything else the differences are vanishingly small. That's quite logical when you realize that the GPU is only used to accelerate a small subset of tasks in Lightroom or Photoshop--and those are graphics-intensive applications. For the vast majority of regular applications the GPU is all but irrelevant.

Looking at GPU memory bandwidth in isolation is just completely ignorant of how most software works.







Nov 04, 2023 at 12:40 PM
CanadaMark
p.1 #14 · MacBook Pro with M3 chip coming soon?


jhapeman wrote:
Read what I wrote: It is. No matter what the GPU is doing, it can't communicate faster than the PCIe theoretical speed, which is lower than even the lowest-end speed on the M3.


What you don't seem to be grasping is that it does not need to communicate faster - there is no bottleneck there, and it isn't even close. The communications between the GPU core and the CPU that do use the PCIe bus are an order of magnitude less than the communications between the GPU and the VRAM. If you're the only car on the highway, expanding the road from one lane to 100 lanes makes literally no difference.

jhapeman wrote:
The speed on the GPU itself is only for tasks that are 100% GPU-bound. So yes, I understand it, but you don't seem to understand that the GPU doesn't just exist in a vacuum. You can't just plug a monitor into a GPU without the rest of the computer there, and while individual tasks are written purely for the GPU, they don't run there with no interaction with the rest of the entire system--things like memory to CPU, CPU to GPU, and storage to CPU all come into play when calculating how fast any given task is going to complete.
...

I did not say it existed in a vacuum. I was simply pointing out that the interactions between the GPU and the CPU do not work in the way you think they do.

jhapeman wrote:
You can certainly speed up the GPU in isolation by increasing the speed of the memory, but it's only one part of the equation. That's why, when you run software benchmarks on applications like Photoshop or Lightroom, there's minimal to no extra return when jumping between, say, an Nvidia 3090 and a 4090 GPU.


That is because PS and LR are CPU-heavy programs, and mostly single-core. So, if you run a software benchmark to evaluate GPU performance on something that barely stresses the GPU, of course you aren't going to see much of a difference, as you aren't really measuring GPU performance at all. Buying an extremely powerful GPU for Lightroom use is silly. If you compare a 3090 and a 4090 on a GPU-heavy process such as a game or a render, you will see exactly how different they are.


jhapeman wrote:
I have a whole group of PCs and Macs I've done comparative testing on; the only task where a 4090 outperforms older-series Nvidia GPUs in Lightroom is AI Denoise. For everything else the differences are vanishingly small. That's quite logical when you realize that the GPU is only used to accelerate a small subset of tasks in Lightroom or Photoshop--and those are graphics-intensive applications. For the vast majority of regular applications the GPU is all but irrelevant.


Lightroom and Denoise are not hard on GPUs. As I mentioned, buying a 4090 or something like that for them would be silly. If you were someone who used Blender or V-Ray, or did heavy video editing, that is where you would see the value of a GPU like that.

These are programs that actually stress the GPU - does this look like a vanishingly small difference to you?

[attached: GPU benchmark comparison charts]
Nov 04, 2023 at 01:23 PM
jhapeman
p.1 #15 · MacBook Pro with M3 chip coming soon?


CanadaMark wrote:
What you don't seem to be grasping is that it does not need to communicate faster - there is no bottleneck there, and it isn't even close. The communications between the GPU core and the CPU that do use the PCIe bus are an order of magnitude less than the communications between the GPU and the VRAM. If you're the only car on the highway, expanding the road from one lane to 100 lanes makes literally no difference.

I did not say it existed in a vacuum. I was simply pointing out that the interactions between the GPU and the CPU
...

Cherry-picking application scores that focus *only* on GPU performance does absolutely nothing to prove your point, and there are numerous differences between the 3090 and 4090 series that account for the performance gap; implying memory speed plays any significant role in it is groundless. Applications do not run in isolation on the GPU, and only with rare exceptions do they use *mostly* the GPU (3D rendering being one of those exceptions). You are still completely wrong in your understanding of how GPUs work in concert with the entire system in 99% of the applications out there. If only the GPU speed and GPU memory speed mattered, people could just ignore PCIe bus speed, whether a card runs at x4 or x16, the speed of their storage, etc. This is so laughably wrong it's hardly worth trying to point out, as you know that ALL of these matter in a computer system. The ultimate question is how much each part contributes in concert to both overall system performance and individual application performance.

For this crowd on FM, even the most intensive applications only rely on the GPU for some operations. Even IF your claim that Apple is using the high-speed memory mainly for the GPU were true--and there are no grounds for making this claim--there is simply no basis for making a judgment about what impact, if any, it will have on total system performance. This is the whole crux of what this discussion has been about. Your statement about a 4090 being a waste for LR is not without merit; equally so, it's quite likely that anything more than 100GB/s bandwidth is a waste for Lightroom too. We just don't have enough data to make that judgment.
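That "each part contributes in concert" point is just Amdahl's law. A quick sketch of the arithmetic, with made-up illustrative fractions:

```python
# Amdahl's law: overall speedup when only a fraction p of a task's
# wall time benefits from a component that gets s times faster.
def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative only: a Lightroom-ish task that is ~10% GPU-bound
# barely moves, while a ~90% GPU-bound render scales almost fully.
print(overall_speedup(p=0.10, s=2.0))   # ~1.05x overall
print(overall_speedup(p=0.90, s=2.0))   # ~1.82x overall
```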

This whole conversation started because of the unsubstantiated claims/implications that somehow it's a "bad" thing that the M3 has reduced memory bandwidth in some iterations of the processor. There is zero evidence, and zero reason, to believe this will matter. In fact, there is early evidence this concern is completely unfounded. Early results showing up on Geekbench 6 show the base M3 with average Metal scores about 11% higher than the M2, even with the reduced memory bandwidth. On the CPU side, the average scores coming in to Geekbench show nearly a 20% improvement.



Nov 04, 2023 at 02:49 PM
RustyBug
p.1 #16 · MacBook Pro with M3 chip coming soon?


CanadaMark wrote:
What you don't seem to be grasping is that it does not need to communicate faster - there is no bottleneck there, and it isn't even close. The communications between the GPU core and the CPU that do use the PCIe bus are an order of magnitude less than the communications between the GPU and the VRAM. If you're the only car on the highway, expanding the road from one lane to 100 lanes makes literally no difference.

I did not say it existed in a vacuum. I was simply pointing out that the interactions between the GPU and the CPU
...

It would be interesting to have power consumption and heat generation overlays for these different levels of performance.



Also, talking about the kinds of programs that task the GPU ... Noise Reduction, Stacking, Filters, Stitching, etc. ... different programs harness the CPU vs. GPU for different tasks, too. Imo, being in the middle of a workflow and applying a function that harnesses the GPU ... I want reduced disruption to my workflow. So, while it isn't the same as processing a video render for an extended time ... the mental disruption of something that takes 2 minutes and can be reduced to 15 seconds can make a diff in your mojo about your work.

That's something that you NEVER SEE anyone do in reviews and benchmarks. They may do a benchmark battery of different tasks or bulk editing, etc. ... but they really don't show how it plays in the workflow. So, dismissing the value of GPU performance because it isn't some intensive video rendering program might be overlooking a different kind of value that the GPU brings to the table.

Yeah, in previous rigs ... the GPU wasn't a strong player for stills. BUT, as software writes more and more processes to the GPU, its role is increasingly significant (imo). And, likely ... its role is only going to grow. I mean, Adobe isn't going to rewrite the entire code base for all their programs' historical / base functions ... but, as they roll out more and more functions / features (other software, also), they have opportunities to harness the GPU more and more ... forward thinking suggests (imo) that stills software does benefit from improved GPU performance. The threshold of benefit may vary based on individual tasks / prefs.



Nov 05, 2023 at 11:17 AM
RustyBug
p.1 #17 · MacBook Pro with M3 chip coming soon?


jhapeman wrote:
equally so, it's quite likely that anything more than 100GB/s bandwidth is a waste for Lightroom too. We just don't have enough data to make that judgment.



All I know is that when I got the unit with the 100GB/s bandwidth ... it was NOTICEABLY slower.

BUT, we have to realize that the memory bandwidth comes along for the ride with more cores ... as they are paired with the modularity of going from Base > Pro > Max > Ultra.

Probably can't fully differentiate (ideal separation of points of constraint) between the gains ... but, simply put ... I'll take more bandwidth, please ... even if I don't max out the memory.




Nov 05, 2023 at 11:37 AM
jhapeman
p.1 #18 · MacBook Pro with M3 chip coming soon?


RustyBug wrote:
All I know is that when I got the unit with the 100GB/s bandwidth ... it was NOTICEABLY slower.

BUT, we have to realize that the memory bandwidth comes along for the ride with more cores ... as they are paired with the modularity of going from Base > Pro > Max > Ultra.

Probably can't fully differentiate (ideal separation of points of constraint) between the gains ... but, simply put ... I'll take more bandwidth, please ... even if I don't max out the memory.



Yes, but there's a difference in the number of cores between the two, so ascribing the performance difference to the memory bandwidth is misplaced. Even across machines with the same memory bandwidth, performance scales with core count.



Nov 05, 2023 at 11:41 AM
RustyBug
p.1 #19 · MacBook Pro with M3 chip coming soon?


jhapeman wrote:
Yes, but there's a difference in the number of cores between the two, so ascribing the performance difference to the memory bandwidth is misplaced. Even across machines with the same memory bandwidth, performance scales with core count.


Understood ... but, I think that if I'm trying to transport multiple things at once, I'd much rather have a four-lane road than a one-lane road. The modular approach adds bandwidth by adding more pipelines. The Base unit has ONE set of lanes. The Pro has TWO sets of lanes. The Max has FOUR sets of lanes.

We can quibble all day long about trying to ascribe "quid pro quo" for performance attributions vs. operations, etc.

But, simply put ... the more lanes you have, the more you can transfer in a given period of time. So, while there are plenty of moving pieces, this is one area where I think it provides a "rising tide lifts all boats" kind of thing. Particularly when you realize that memory is shared between CPU operations and GPU operations. More lanes means they don't have to transfer sequentially or "wait in line" to share the road. Share the memory and share the road. Again, I'm far behind the curve on fully understanding the architectural details ... but the principle of the modular architecture providing 1X, 2X, 4X is something I see as fundamental to the flow of things moving in / out of memory. YMMV


I just think that MORE BANDWIDTH is a good thing in the realm of multiples (i.e. 2X, 4X, or 3.5X, etc.). Imo, this is in part why for some operations you get performance gains that are an incremental % (i.e. 20%), whereas some improvements are on the order of greater than 100% (i.e. time reduced by more than 1/2). The latter I attribute to the bandwidth for operational transfer; the former I attribute to clock speed gains.

I'm quite certain that I'm "technically" wrong about how I'm stating things, but it seems rather simple that more bandwidth allows for better transfer / flow in and out of memory as gazillions of operations are processed.
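As a toy model of that intuition (illustrative numbers only): a purely transfer-bound step scales as bytes / bandwidth, so doubling bandwidth halves the time, while a 20% clock bump on a compute-bound step saves only about 17%:

```python
# Toy model: transfer-bound time = data / bandwidth; illustrative numbers.
data_gb = 8.0
for bw in (100, 200, 400):   # GB/s tiers
    print(f"{bw} GB/s -> {data_gb / bw * 1000:.0f} ms")   # 80 / 40 / 20 ms

# vs. a 20% clock increase on a compute-bound step of the same length:
baseline_ms = 80.0
print(f"1.2x clock -> {baseline_ms / 1.2:.0f} ms "
      f"(~{1 - 1 / 1.2:.0%} saved)")                      # ~17% saved
```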



Nov 05, 2023 at 12:44 PM
jhapeman
p.1 #20 · MacBook Pro with M3 chip coming soon?


RustyBug wrote:
Understood ... but, I think that if I'm trying to transport multiple things at once, I'd much rather have a four-lane road than a one-lane road. The modular approach adds bandwidth by adding more pipelines. The Base unit has ONE set of lanes. The Pro has TWO sets of lanes. The Max has FOUR sets of lanes.

We can quibble all day long about trying to ascribe "quid pro quo" for performance attributions vs. operations, etc.

But, simply put ... the more lanes you have, the more you can transfer in a given period of time. So, while there are plenty
...

Keep in mind that with the increases in bandwidth come increases in cores and increases in memory--basically like just having more cars on the highway, using your analogy. That's the whole reason they increase that bandwidth, by the way: more cores and more memory need more pipeline to keep them fed.

I don't know how to make it any clearer, but rigorous testing showed you couldn't saturate the memory bandwidth on the earlier M versions, so ascribing performance to it is just misplaced wishful thinking. When you jumped from a base M1 or M2 to a Pro or Max and got more memory bandwidth, you also got more cores--both CPU and GPU--and those have very noticeable and measurable impacts on performance.

I will reiterate what I have said above: early benchmark results are showing the expected jumps in GPU and CPU performance, regardless of the memory bandwidth change. Given what has been seen in previous tests, and how we know performance scales with base frequency and core count, this is no surprise.



Nov 05, 2023 at 01:28 PM