Talk:Megahertz myth


This Pseudoscience-related article has not received a brainstar for quality. Please consider expanding the article appropriately. See RationalWiki:Article rating for more information.


Meh

Upon reading this article, I find that the top is missional, and the rest is general knowledge that you can read in a textbook – it even reads like a textbook too. The missional content doesn't even cite the sources of the crank idea or analyze them. This article has a long way to go.—(((CheeseburgerFace))) (talkstalk) 02:46, 2 January 2017 (UTC)

This popped up in recent changes and I had a very similar thought 2 years later. The fact is that clock speed is a legitimate measurement of CPUs (and GPUs), and treating it like the only thing that matters is kind of a simple error. Like thinking only megapixels matter for cameras or only horsepower matters for cars. It's a specific case of a general kind of mistake. ikanreed 🐐Bleat at me 23:23, 22 February 2019 (UTC)

Ultimately

Having an article that at least tries to cut through the assorted ol' "more bits = looks better!"/"more gigs = runs faster!"/"more hertz = a whole new experience!"-type computer myths is good, especially since a central theme of what we do as a skeptical resource is to provide consumer information (regardless of whether people consume ideas or products, per se). So the goal of the article is worthy. This isn't a comment on the contents of the current article, which I've barely perused. But the idea is highly missional. Reverend Black Percy (talk) 03:05, 2 January 2017 (UTC)

An image we can use

Ta-daah! Reverend Black Percy (talk) 03:27, 2 January 2017 (UTC)

Relevant?

https://scalibq.wordpress.com/2012/06/01/multi-core-and-multi-threading/ Reverend Black Percy (talk) 03:28, 2 January 2017 (UTC)

Scrolling down to "The multi-core myth", it looks like it.—(((CheeseburgerFace))) (talkstalk) 04:02, 2 January 2017 (UTC)

Goat

This article is eccentric to RW's mission, as currently written, but with some cleanup, summarization, links, and snark it could be worthy of bronze. --Cosmikdebris (talk) 03:29, 2 January 2017 (UTC)

Not to be confused with

This beastie. 31.51.113.95 (talk) 08:54, 2 May 2017 (UTC)

we somehow skipped something crucial

I have no idea how to write this up for the article, but we omit it entirely, and it's pretty important to this idea.

Processor pipeline forking. For any given CISC chip architecture (such as x86 or AMD64), the actual instructions performed by the CPU are just promises about what the system state will be after they complete.

You might have a series of instructions that are like this:

ADC RAX, qword ptr ds:[0xDEADBEEFDEADBEEF]   ; RAX = RAX + memory value + carry flag
JC DOSOMETHING                               ; branch away if that add carried
ADD qword ptr ds:[0xFACEFEEDFACEFEED], RAX   ; memory value = memory value + RAX
JMP LOOP                                     ; back to the top of the loop

A naive interpretation of the first instruction would be: go out to memory, fetch the value, add it (plus the carry flag) into RAX, and stall until that's done; the ADD two lines later is worse, since it also has to write its result back to the same memory address. At modern processor speeds, a trip out to uncached memory like that costs something like 30-40 cycles. To add two integers.
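
To make the cost of that trip to memory concrete, here's a small C sketch. It's illustrative only: the array size, the timing method, and the exact ratio are assumptions that vary by machine. It times a pass where every load depends on the result of the previous one (so the CPU has to wait out the full latency each time) against a plain sequential pass over the same memory, which the prefetcher and pipeline can largely hide:

/* Illustrative sketch: dependent loads vs. a sequential sweep over the same data. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 23)   /* 8M entries * 8 bytes = 64 MB, bigger than any cache */

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    /* Sattolo's algorithm: a random permutation that forms one big cycle,
       so every load in the chase below depends on the previous one. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;        /* j < i guarantees a single cycle */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    /* Sequential pass: the hardware prefetcher keeps the pipeline fed. */
    clock_t t0 = clock();
    size_t sum = 0;
    for (size_t i = 0; i < N; i++) sum += next[i];
    double seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Pointer chase: each load must wait for the previous one to come back. */
    t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];
    double chase = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("sequential: %.3fs   pointer chase: %.3fs   (%zu %zu)\n",
           seq, chase, sum, p);
    free(next);
    return 0;
}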

In addition to caching memory values, processors have developed something they call a "pipeline": while the instruction pointer is fetching something like 20 operations ahead of what's actually executing, the circuitry is already resolving the add's address, starting the memory fetch, and preloading likely values for RAX (based on earlier steps in the pipeline), so that when the add instruction's turn actually comes up, it can be completed in a single cycle.
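
A rough way to watch the pipeline do that overlapping: time one long chain of additions, where each add needs the previous result before it can start, against the same number of additions split across four independent accumulators that the pipeline can keep in flight at once. This C sketch is illustrative only; the exact speedup is an assumption that depends on the CPU and on compiler flags (build without -ffast-math, or the compiler is allowed to reassociate the single chain and erase the difference):

/* Illustrative sketch: one dependency chain vs. four independent ones. */
#include <stdio.h>
#include <time.h>

#define N 100000000  /* 100 million additions */

int main(void) {
    /* One long chain: add i+1 can't start until add i has finished. */
    clock_t t0 = clock();
    double a = 0.0;
    for (long i = 0; i < N; i++) a += 1.000000001;
    double chain = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Four independent chains: the pipeline overlaps several adds per cycle. */
    t0 = clock();
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (long i = 0; i < N; i += 4) {
        s0 += 1.000000001; s1 += 1.000000001;
        s2 += 1.000000001; s3 += 1.000000001;
    }
    double split = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("one chain: %.3fs   four chains: %.3fs   (%f %f)\n",
           chain, split, a, s0 + s1 + s2 + s3);
    return 0;
}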

This seems like genius; the problems only start when you look at the very next instruction. Either the conditional jump sends the instruction pointer off to an entirely new address, invalidating the pipeline you've built up, or execution falls through to another memory-touching add that would take just as long.

So they solve this by making multiple pipelines, cleverly set up to work correctly in parallel. Except you really can't afford to make a CPU with 2^20 pipelines, so you make some compromises, branch prediction being the classic one. And the interaction between those compromises and the actual code people write determines big hits or misses in performance.
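
The classic demonstration of that interaction is branch prediction. The C sketch below runs the exact same comparison over the exact same values twice; the only thing that changes is whether the data is sorted, i.e. whether the branch predictor can guess right. It's illustrative only (sizes and timings are assumptions), and it should be built without aggressive optimization, since a clever compiler may replace the branch with a conditional move and flatten the difference:

/* Illustrative sketch: identical work, different branch predictability. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000
#define PASSES 100

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

static long long tally(const int *v) {
    long long sum = 0;
    for (int pass = 0; pass < PASSES; pass++)
        for (int i = 0; i < N; i++)
            if (v[i] >= 128) sum += v[i];   /* the branch in question */
    return sum;
}

int main(void) {
    int *v = malloc(N * sizeof *v);
    srand(1);
    for (int i = 0; i < N; i++) v[i] = rand() % 256;

    clock_t t0 = clock();
    long long unsorted = tally(v);          /* branch outcome is essentially random */
    double t_unsorted = (double)(clock() - t0) / CLOCKS_PER_SEC;

    qsort(v, N, sizeof *v, cmp);
    t0 = clock();
    long long sorted = tally(v);            /* branch is almost always predicted right */
    double t_sorted = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("unsorted: %.3fs   sorted: %.3fs   (sums %lld %lld)\n",
           t_unsorted, t_sorted, unsorted, sorted);
    free(v);
    return 0;
}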

Which means you can have every single numeric metric on two processors be the same: cache size, clock speed, cores, floating-point implementation, all of it. And still get differences in performance. But I don't know how to describe this problem well for the article itself. ikanreed 🐐Bleat at me 19:35, 10 February 2020 (UTC)