
And that machines with older AI acceleration hardware, or none at all, will take correspondingly longer. I just ran some tests, and below is what I got: on my mid-range but recent laptop, I have so far never seen a Denoise time longer than a minute. My hardware list is at the end…nothing special from a CPU/GPU point of view, so I strongly suspect it's the Neural Engine's machine learning acceleration that cuts the times. Dedicated hardware acceleration for machine learning/AI may be the missing link that explains why some computers process AI features so much faster than others, and why older machines take so much longer. I think the Ian Lyons post that Victoria linked to may hold part of the answer.
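For anyone curious to test this hypothesis on their own Mac, here's a minimal Swift sketch using Apple's Core ML framework. It times the same model twice: once restricted to the CPU, and once with all compute units (including the Neural Engine, if present) allowed. The model path and the empty input provider are placeholders for illustration only, not Lightroom's actual Denoise model, which Adobe doesn't expose.

```swift
import CoreML
import Foundation

// Placeholder path: substitute any compiled Core ML model you have on disk.
let modelURL = URL(fileURLWithPath: "/path/to/SomeModel.mlmodelc")

func timeRun(_ units: MLComputeUnits, label: String) throws {
    let config = MLModelConfiguration()
    config.computeUnits = units    // .cpuOnly, .cpuAndGPU, or .all
    let model = try MLModel(contentsOf: modelURL, configuration: config)

    // Placeholder input: replace with a feature provider matching
    // the model's real input names and types.
    let input = try MLDictionaryFeatureProvider(dictionary: [:])
    let start = CFAbsoluteTimeGetCurrent()
    _ = try model.prediction(from: input)
    print("\(label): \(CFAbsoluteTimeGetCurrent() - start) s")
}

try timeRun(.cpuOnly, label: "CPU only")
try timeRun(.all, label: "CPU + GPU + Neural Engine")
```

If the second run is dramatically faster, that's the Neural Engine at work; on an older machine without one, the two times should be much closer, which would be consistent with the slow Denoise times people are reporting.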
