
Understanding Frequency / Oscillation of a Chip

I have literally no clue what it is. I wouldn't even know how to go about finding out what it is. I would greatly appreciate any help.


Think of the tick-tock sound of a grandfather clock or another pendulum-based clock. A complete tick-and-tock cycle is usually one second in that case. With clocks and watches we went from gravity- and spring-powered mechanical movements to crystal-powered electrical ones. Certain crystals can be used in circuits in such a way that they create an electrical oscillation. Digital electronics use crystals as well, because they give high-speed and very accurate timing. So the clock in this case is the electrical output of this crystal oscillator: a 10,000,000 Hz (10 MHz) clock means there are 10 million electrical tick-tocks per second. Feed that into AND, OR, and NOT logic and you can run processors and peripherals.
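Frequency and period are reciprocals, so "how many tick-tocks per second" converts directly to "how long one tick-tock lasts"; a minimal C sketch using the 10 MHz figure from above:

    #include <stdio.h>

    int main(void)
    {
        double freq_hz  = 10000000.0;    /* the 10 MHz crystal from the text */
        double period_s = 1.0 / freq_hz; /* length of one complete tick-tock */

        printf("One clock cycle lasts %.0f ns\n", period_s * 1e9); /* 100 ns */
        return 0;
    }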

Let's limit the discussion for a second to older processors or microcontrollers, where the processor, the memory, and everything else use the same clock. With this clock signal feeding the digital logic, you can have logic that reads instructions from memory and executes them at some derived rate. Loading a register with a value from memory (a read) may take 3 clock cycles: one to fetch the instruction from memory, another to decode and begin to execute it, and, because the instruction is a read from memory and the memory cannot do two things at once (in this model), a third clock cycle to read from memory. Then the next instruction is fetched and executed, and so on. So some processors vary their execution time per instruction: there is always a fetch of the instruction from memory and a decode and execute, and each of these steps takes one or more clock cycles. Other processor designs choose a fixed instruction cycle of, say, 4 clock cycles, matching the longest-executing instruction for that processor, even though some instructions could finish in only one clock cycle.
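To put rough numbers on that, here is a small C sketch that turns a clock rate and a cycles-per-instruction count into an instruction rate; the 4 MHz clock is a made-up value, and the 3 cycles are the hypothetical load from the paragraph above:

    #include <stdio.h>

    int main(void)
    {
        double clock_hz       = 4000000.0; /* hypothetical 4 MHz system clock */
        int    cycles_per_ins = 3;         /* fetch + decode/execute + memory read */

        printf("Instructions per second: %.0f\n", clock_hz / cycles_per_ins);
        printf("Time per instruction:    %.2f us\n",
               1e6 * cycles_per_ins / clock_hz); /* 0.75 us with these numbers */
        return 0;
    }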

Then things got better, and you hear the word pipeline, and you see advertisements of an x86 being able to execute one instruction per clock, or superscalar designs that execute more than one instruction per clock. That is a bit misleading. What the pipeline does is create an assembly line, if you will. Think of factory TV shows like How It's Made. The machine that makes padlocks has many steps along the assembly line, and each step performs one simple operation: paint the numbers on the dial, mount the dial onto a shaft, insert the dial plus shaft into the body, etc. If all you looked at was the last step, it would look like they were making one lock per cycle of the assembly line, when it may have actually taken 30 steps per lock. With processors, on one clock cycle one stage of the pipeline fetches an instruction from RAM, the next stage decodes the previous instruction, the third is executing the instruction from two clocks ago, and so on (see the sketch below). So when you hear about flushing a pipe, or a pipeline stall, that means the assembly line has stopped, or they have to stop the line, throw out every item on it, and start fresh. Traditionally, when you take a branch instruction, the few instructions fetched right after it are not going to be executed; you have to flush the pipe and start filling it again from the address where the branch leads.
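To make the assembly-line idea concrete, here is a toy C simulation of a 3-stage pipeline (fetch, decode, execute); the stage count and instruction IDs are made up for illustration. Each instruction spends 3 clocks traversing the pipe, yet once the pipe is full, one instruction completes every clock:

    #include <stdio.h>

    #define STAGES 3  /* toy pipeline: fetch, decode, execute */

    int main(void)
    {
        int program[] = {1, 2, 3, 4, 5, 6};  /* made-up instruction IDs */
        int n = sizeof program / sizeof *program;
        int pipe[STAGES] = {0};              /* 0 = stage is empty */
        int next = 0, done = 0;

        for (int clk = 1; done < n; clk++) {
            /* advance the assembly line by one stage */
            for (int s = STAGES - 1; s > 0; s--)
                pipe[s] = pipe[s - 1];
            /* fetch the next instruction into the first stage, if any remain */
            pipe[0] = (next < n) ? program[next++] : 0;
            /* whatever reached the last stage finishes on this clock */
            if (pipe[STAGES - 1]) {
                printf("clock %2d: instruction %d completes\n",
                       clk, pipe[STAGES - 1]);
                done++;
            }
        }
        return 0;
    }

Instruction 1 does not complete until clock 3, but instructions 2 through 6 each complete one clock apart. A flush would amount to zeroing out the pipe array and refetching from the branch target.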

If you have been around long enough, or do some googling or Wikipedia reading, you will hear about the 486DX processor. It was the beginning of something that is extremely common now. Before that time there was a single crystal-oscillator-based clock into the processor: if you wanted to run your processor at 16 MHz, you fed it a 16 MHz clock. A few problems arose. One was that memory interfaces were not able to keep up. A very simple solution is to talk to the memory at a reduced rate: say your processor runs at 16 MHz, you could run the memory at 8 MHz by creating a new clock at half the rate of the main clock. From the tick-tock-tick-tock at 16 MHz, you change the output to the memory only on the ticks. The first internal tick puts a tick on the memory bus clock; on the first tock, no change on the memory output; on the second tick you put a tock on the memory bus clock; on the third tick, a tick on the memory; on the fourth tick, a tock on the memory bus; and so on. The real problem came when the I/O itself, all of the pins on the processor, could not run at the clock rate the transistors inside could. So with the 486DX, they took a 25 MHz clock and, using some analog magic, turned it into a 50 MHz clock inside the part. The edges of the part, the memory and other buses, ran at 25 MHz or slower, but the processor, so long as you could feed instructions fast enough (this leads into a discussion of caches), would run the pipeline at 50 MHz. In bursts without stalls or flushes, that would be 50 million instructions per second. Creating a stable 2x multiplier was tricky business at the time, but today it is relatively trivial. Modern multi-gigahertz processors do not use gigahertz clock inputs; they often use clocks around 100 MHz and scale them up to a few gigahertz for the processor cores. Then you may hear about 800 MHz DDR memory, or 1066 MHz memory, or 1333 MHz, etc. Same deal: the 100 MHz reference clock is multiplied up to create those speeds for the memory bus.
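The divide-by-2 trick described above (change the memory bus clock only on the ticks of the fast clock) is what a single flip-flop divider does in hardware; here is a C sketch of that behavior, with a made-up number of clock edges:

    #include <stdio.h>

    int main(void)
    {
        int cpu_clk = 0;  /* the fast clock: 1 = tick, 0 = tock */
        int mem_clk = 0;  /* derived bus clock at half the rate */

        for (int half_period = 0; half_period < 8; half_period++) {
            cpu_clk = !cpu_clk;      /* every transition of the fast clock... */
            if (cpu_clk)             /* ...but only on the ticks... */
                mem_clk = !mem_clk;  /* ...toggle the slow clock: divide by 2 */
            printf("cpu=%d mem=%d\n", cpu_clk, mem_clk);
        }
        return 0;
    }

mem_clk completes one full cycle for every two cpu_clk cycles, so a 16 MHz processor clock yields an 8 MHz memory clock. Going the other way, from a 100 MHz reference up to a few gigahertz, is done with a PLL rather than simple logic.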

So in modern computers, as in the old days, you still use crystal-based oscillators as a clock source, the magic of nature, and from that you create many clocks of different speeds for the processor and the peripherals around the computer. For example, you can google the various clock rates for USB, FireWire, and hard disk interfaces (IDE/SATA), etc.

I have seen your other questions about PIC programming, for example. The PIC is in that traditional mode where everything runs off a single clock at a single clock rate, and if you look at one of my answers there you can see the counting of "clock" cycles, using cycles as the unit. Once that is done you can apply a unit of time to it, multiplying or dividing as the case may be for a 4 MHz clock or a 1 MHz clock, etc. The same section of code may take 100 cycles to execute; a processor using a 4 MHz clock will execute those 100 cycles four times faster than one using a 1 MHz oscillator (see the sketch below). As drhirsch implied, it is almost linear, at least for simple cases like this. For modern computers, if you are reading data from the same hard drive with the same code, a 3 GHz processor is not 3 times faster than a 1 GHz processor, because the hard drive is the same slow speed and both the 3 GHz processor and the 1 GHz processor are stalled waiting for data from the disk. You may have a taxi cab that is a Ferrari, but if you live in a town that is 1 mile long and wide, full of older, slower folks, your Ferrari is going to be parked most of the time loading and unloading passengers, not speeding for a few hundred yards. A minivan would actually be faster than the Ferrari (getting folks in and out is the bottleneck).
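The arithmetic in that cycle-counting exercise is just time = cycles / frequency; a minimal C sketch using the 100-cycle figure from above:

    #include <stdio.h>

    int main(void)
    {
        long   cycles      = 100;        /* cycle count for the code section */
        double clocks_hz[] = {1e6, 4e6}; /* 1 MHz and 4 MHz oscillators */

        for (int i = 0; i < 2; i++)
            printf("%ld cycles at %.0f MHz -> %.0f us\n",
                   cycles, clocks_hz[i] / 1e6, 1e6 * cycles / clocks_hz[i]);
        return 0;
    }

That prints 100 us for the 1 MHz clock and 25 us for the 4 MHz clock: four times faster, as long as nothing slower (like that hard drive) gets in the way.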


In this context the clock frequency or clock rate is the rate at which single instructions, or the smallest indivisible parts of instructions, are executed in a CPU. It is the inverse of the length of a clock cycle.

Examples: one clock cycle on a Z80 lasts 250 ns (because of its 4 MHz frequency); on a Phenom it lasts 0.333 ns (its frequency is about 3 GHz).

On an older Z80, moving 8-bit data from one CPU register to another needed 4 clock cycles, whereas on a Phenom the same operation needs one cycle, and up to 3 such instructions can be done in parallel.
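Plugging those numbers into the inverse relationship gives the cycle times and register-move times directly; a small C sketch using the figures from this answer:

    #include <stdio.h>

    int main(void)
    {
        double z80_hz    = 4e6;  /* Z80 at 4 MHz */
        double phenom_hz = 3e9;  /* Phenom at about 3 GHz */

        printf("Z80 cycle:       %.3f ns\n", 1e9 / z80_hz);     /* 250 ns    */
        printf("Phenom cycle:    %.3f ns\n", 1e9 / phenom_hz);  /* ~0.333 ns */

        /* 8-bit register move: 4 cycles on the Z80, 1 on the Phenom */
        printf("Z80 reg move:    %.1f ns\n", 4 * 1e9 / z80_hz); /* 1000 ns */
        printf("Phenom reg move: %.3f ns\n", 1e9 / phenom_hz);
        return 0;
    }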

This obviously depends on the architecture of the CPU, but for a given CPU type there is an almost linear dependency between execution speed and clock frequency.
