-
The E6600, E6420, and E6320 all have 4MB of L2 cache, so in theory performance is the same at the same clock speed. But there are still real differences:
1. The E6600 has a higher multiplier, so you don't need an exceptionally strong motherboard to reach a given clock. If you want to run SuperPi 1M at around 4GHz, work out what FSB (external frequency) the E6320 would need (see the quick calculation at the end of this answer). How many boards can even get there? And even if you buy such a board (certainly not cheap) and then pair it with a low-end CPU, isn't that a lopsided build?
2. On average, the absolute frequency the E6600 can reach when overclocked is higher (it varies chip to chip, so luck still matters). I often see people running an E6600 at 4GHz or so, but an E6320 or E6420 getting up to that frequency is rare.
3. Even overclocked to the same frequency, power consumption is not the same. The E6600 and E6420 are both rated 65W at stock, but just for the latter to reach the former's stock frequency, power draw goes up even without raising the voltage. In other words: buying the low-end chip to overclock means spending more on electricity and more on cooling as the price of the overclock.
To sum up: overclocking tests the whole machine, not just the CPU, and as the FSB climbs you will run into problems elsewhere. Also, at the same core clock, a higher FSB means more bandwidth, so memory benchmarks look better, but you will hardly feel the difference in real use.
Stressing the motherboard can also create additional heating issues.
One more thing for you: there are only buyers who buy wrong, never sellers who sell wrong. You get what you pay for (except when your luck is unusually good or unusually bad).
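A quick sketch of that FSB math (assuming stock multipliers of 9x for the E6600 and 7x for the E6320, which is how I remember those chips; the script is only an illustration):

```python
# Required FSB = target clock / multiplier.
# Multipliers assumed here: E6600 = 9x, E6320 = 7x (stock values as I recall them).
target_ghz = 4.0

for name, multiplier in [("E6600", 9), ("E6320", 7)]:
    fsb_mhz = target_ghz * 1000 / multiplier
    print(f"{name}: needs ~{fsb_mhz:.0f} MHz FSB to reach {target_ghz} GHz")

# E6600: ~444 MHz FSB; E6320: ~571 MHz FSB. Very few boards of that era ran anywhere near 571.
```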
-
Update to BIOS version 0906 or later.
Then it can go above 4GHz.
-
Every CPU's silicon is different: raise the clock little by little until the system becomes unstable.
It's a long process; at each step, run the machine for a while to check that it is still stable.
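As a minimal sketch of that loop (stress_test_passes is a placeholder for whatever stability check you actually use, for example a long Prime95/OCCT run; all numbers are just illustrative):

```python
# Minimal sketch of the step-by-step overclocking process described above.
# stress_test_passes() is a placeholder: in real life this means running a
# stress test for a while at each setting and watching for errors and temps.

def stress_test_passes(fsb_mhz: int) -> bool:
    # Placeholder simulation: pretend this particular chip tops out at 400 MHz FSB.
    return fsb_mhz <= 400

multiplier = 9   # e.g. an E6600 at its stock multiplier (assumed example)
fsb = 266        # stock FSB in MHz
step = 5         # raise the FSB a little at a time

while stress_test_passes(fsb + step):
    fsb += step
    print(f"Stable so far: {fsb} MHz FSB = {fsb * multiplier / 1000:.2f} GHz")

print(f"Last stable setting found: {fsb} MHz FSB")
```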
-
Is a processor with more GHz always better, i.e. is the higher-clocked CPU definitely the better one?
In theory, the higher the better, but it also depends on the architecture; clock speeds are only comparable when the brand and the architecture are the same, otherwise they are not. For example, the old Pentium D 925 had a high clock speed, but its performance still couldn't match the processing speed of the Core architecture.
So raw Hz alone is not an accurate measure. In short, with CPUs you get what you pay for: more expensive is better, newer is better. If you can't make sense of all the specs, just go by the price when you buy.
Different concepts of frequency:
The external frequency is the base frequency of the CPU, also measured in MHz. It is the speed at which the CPU runs in sync with the motherboard, and in most systems it is also the speed at which the memory runs in sync with the motherboard.
The overall quality of a CPU can only be judged by looking at its main frequency, bus, L1 cache, and L2 cache together. The main frequency is simply the CPU's clock frequency.
Within the same series of microprocessors, a higher clock speed means a faster computer; across different types of processors it can only serve as a rough reference. The CPU's computing speed also depends on the performance of every stage of its pipeline.
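For reference, the relationship these terms describe, written out (this is the standard definition, not something specific to any one answer here):

$$ \text{main frequency} = \text{external frequency (FSB)} \times \text{multiplier} $$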
-
CPU performance does not depend only on the main frequency; many factors matter:
1. Process. 65nm is mainstream now and Intel already has 45nm; a smaller number means lower power consumption and a more advanced process.
2. Pipeline stages. Intel's previous designs had roughly 30 stages, while the new architecture uses about 15. It's like a job that takes you 30 steps but takes someone else 15: even with a higher clock you still can't outrun them (see the rough sketch after this list).
3. Cache. Personally I think the L1 cache matters more than the L2, but these days people quote the L2, so compare the L2 cache; of course, bigger is better.
4. Number of cores. Current dual- and multi-core chips run at fairly modest clocks; for a single core that kind of clock would indeed be a bit low. But Intel has now also released a single-core chip on the Core architecture, the new Celeron 420, and I don't know how it performs.
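To put point 2 in concrete terms, here is a toy comparison that follows the poster's 30-steps-vs-15-steps analogy; the clock values are my own illustrative assumptions, and real pipelines are more subtle than "clock divided by steps":

```python
# Toy illustration of point 2: performance ~ clock speed x work done per clock.
# The 30-step vs 15-step figures are the poster's rough characterization of the
# old long-pipeline design vs the newer Core architecture; clocks are assumed.

def relative_performance(clock_ghz: float, steps_per_task: int) -> float:
    # Fewer steps per task means more tasks finished per unit of clock time.
    return clock_ghz / steps_per_task

old_cpu = relative_performance(clock_ghz=3.6, steps_per_task=30)  # high clock, long pipeline
new_cpu = relative_performance(clock_ghz=2.4, steps_per_task=15)  # lower clock, short pipeline

print(f"old: {old_cpu:.3f}, new: {new_cpu:.3f}")  # the lower-clocked chip comes out ahead
```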
Generally speaking, faster memory can also speed things up somewhat. Just note that the memory sticks should run at the same frequency, and try to use the same brand.
-
Many cores at a high frequency is best.
For a single simple task, there is little difference between a high-frequency single core and a lower-frequency dual core.
When many tasks run at the same time, the advantage of dual-core and above becomes much more obvious.
-
At present, the highest-frequency CPU Intel officially sells is the latest ninth-generation Core i9 9900K, whose maximum turbo frequency is 5.0GHz. It is also among the top 20 strongest consumer CPUs right now; there are CPUs that outperform it, but their clock speeds are not as high, and they win by having more cores instead.
The second place is the i7 9700k.
-
It's true that mainstream processors have rarely exceeded 4GHz since around 2005. Dennard scaling said that as transistors shrank, power density stayed roughly constant, so each generation could also clock higher; but around 2005 it hit a wall, Dennard scaling was rarely mentioned afterwards, and only parallelism could be relied on to keep Moore's Law (transistor counts doubling every two years) paying off. You can see in the chart above that the green line, the frequency of mainstream processors, has not only stopped rising but has even been drifting down recently.
Engineering is about making trade-offs for the whole system. In the days when both Dennard scaling and Moore's Law held, electronics engineers had one decision that needed no weighing at all: make the transistors smaller.
Nothing to trade off; smaller was simply better. It was the easiest era electronics engineers have ever had. Fifty years on, that era is over, and engineers have to make trade-offs again.
The bottlenecks that now have to be weighed include the memory-access bottleneck, the instruction-level (pipeline) parallelism bottleneck, and the heat-dissipation bottleneck; of these, the heat-dissipation bottleneck is considered the hardest to overcome.
Let's start with memory access. The processor and memory are two separate parts connected by a channel, and that channel has limited bandwidth. Transistor counts in processors double every two years, but the channel's bandwidth does not grow nearly as fast, so reading and writing memory takes longer than the processor takes to compute, which weakens the payoff of making the processor itself any faster.
Next, instruction pipelining. Each time the CPU clock doubles, as long as every component can still finish its work in time, the speed of the whole system doubles. But every component and wire has physical properties: a signal passing through a component picks up a delay set by the component's resistance and capacitance, and in a wire an electrical signal travels only at the speed of light in that medium, which is below 300 million meters per second, roughly 200 million meters per second.
At 4GHz that is 4 billion cycles per second, so in one cycle an electrical signal can advance only about 5 centimeters. Over the past 20 years, once other approaches reached their limits, the industry raised clock speeds with pipelined designs: an instruction is split across multiple clock cycles, raising throughput per unit time even though the completion time of a single instruction stays the same. The figure above is the five-stage pipelined processor architecture I designed when I studied computer architecture; it splits a machine instruction into five stages: instruction fetch, instruction decode, execute, memory access, and register write-back.
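The distance figure follows directly from the numbers just quoted (a signal speed of roughly 2x10^8 m/s in the wire and a 4GHz clock):

$$ d = \frac{v}{f} \approx \frac{2 \times 10^{8}\ \text{m/s}}{4 \times 10^{9}\ \text{Hz}} = 0.05\ \text{m} = 5\ \text{cm} $$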
Current Intel i7 designs break an instruction down into as many as 24 pipeline stages[2]. A split that fine is already extreme, and it is hard to imagine how much further it can be pushed.
And temperature is the most dreaded trade-off. Above 4GHz, the performance lost to rising temperature outweighs the performance gained from faster instruction processing. If the first two bottlenecks can still be attacked through design, this one can only be solved by new materials or new processes.
As for the physical device modern computers rely on, the MOSFET (metal-oxide-semiconductor field-effect transistor), no better replacement appeared in the roughly forty years from the birth of Dennard scaling to 2005. Newer generations of MOSFETs have since been released, but they have not fundamentally solved the heat-dissipation problem.
-
Is it hard to exceed 4G? Are you trolling me?
-
That number is the processor's clock speed, which says how many clock cycles the processor goes through in a second; for example, 3200MHz means 3.2 billion cycles per second. If you want it to run faster, you can overclock the processor.
-
It depends on your needs and the number of cores, and frequency is not everything.
-
The main reason is heat dissipation: once the clock speed goes beyond a certain range, heat density rises rapidly, which is both uneconomical and hard to cool.
Looking back to 2004, an ambitious Intel announced that the Pentium 4, riding Prescott's ultra-long pipeline, would deliver a 4GHz mainstream CPU, but in the end it was cancelled for various reasons. After that, clock speeds stalled rather than advanced, and it was not until the 4th-generation Core, codenamed Haswell (the 4790K), that a stock 4GHz chip really arrived; its successors Broadwell, Skylake, Kaby Lake and Coffee Lake have done little to push frequency further. More than ten years have passed, so why can't CPU clock speeds keep rising?
What is going on? Have we already hit the frequency ceiling?
We know that to improve a CPU's computing performance you cannot simply keep piling on cores. So can we instead just raise the CPU clock so that each core finishes its calculations faster? Why does Intel, which leads in CPU process technology, no longer chase peak clock speed?
In fact, the bottleneck is mainly heat dissipation.
When the input level is low, the load capacitance CL charges; suppose it stores A joules of energy. When the input goes high, those A joules are released. Because CL is tiny, A is also tiny, almost negligible.
But if we toggle this FET at 1GHz, the power consumed becomes A × 10^9 watts, which is no longer negligible, and with the billions of FETs in a CPU the total becomes quite substantial.
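Here is a back-of-the-envelope version of that argument in code; the capacitance, voltage, and transistor counts are my own illustrative assumptions, not figures from the answer:

```python
# Rough CMOS dynamic-power estimate: each toggle of a gate dissipates roughly
# E = C * V^2 (charge the load capacitance, then dump it), and at frequency f
# that happens f times per second, so P ~ C * V^2 * f per switching transistor.
# All numbers below are illustrative assumptions.

c_load = 1e-15        # load capacitance per gate: ~1 femtofarad (assumed)
v_dd = 1.0            # supply voltage in volts (assumed)
freq = 1e9            # switching frequency: 1 GHz
n_switching = 1e8     # transistors actually toggling each cycle (assumed fraction of billions)

energy_per_toggle = c_load * v_dd ** 2          # the "A joules" in the answer's terms
power_per_gate = energy_per_toggle * freq       # A x 10^9 watts at 1 GHz
total_power = power_per_gate * n_switching

print(f"energy per toggle: {energy_per_toggle:.1e} J")   # tiny on its own
print(f"total dynamic power: {total_power:.1f} W")       # adds up to real watts at chip scale
```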
-
Current CPUs and ASICs are basically built from CMOS logic circuits, and improvements in the process reduce CMOS transistor delay, which lets each instruction finish within a shorter clock cycle. In other words, the clock speed can rise as the process shrinks. So let's look at why a finer process reduces transistor delay. The figure below is a cross-sectional schematic of a CMOS transistor; its switching speed is affected by many factors, including the electric field strength and the electron mobility.
The electric field strength depends on the voltage applied across the source and drain and on the length of the channel.
The shorter the channel (this is what the commonly quoted 28nm and 16nm process figures refer to), the stronger the electric field and the faster the CMOS transistor switches. However, as processes advance it becomes more and more difficult to shorten the channel further, which means transistor delay is getting hard to reduce any more. So why keep improving the process at all? The point is to fit more transistors into a smaller area, so that chip performance can be raised through more complex circuit designs, more parallelism and the like.
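As a first-order textbook relation (not something taken from the figure), the field in the channel scales as

$$ E \approx \frac{V_{ds}}{L_{\text{channel}}} $$

so at a fixed supply voltage, shrinking the channel length raises the field and speeds up switching.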
As the process shrinks, power consumption becomes an ever more prominent nightmare in chip design (the reasons for the increase won't be expanded on here). The graph below shows how the power consumption of an i7 changes as its operating frequency rises: beyond a certain frequency, power consumption climbs dramatically.
A chip's design temperature range is usually -40 to 125 degrees Celsius, so the heat built up by power consumption must not push the chip's temperature outside this range. But as the process shrinks, the number of transistors per unit area grows, heat accumulates per unit area more noticeably, and, limited by packaging and cooling costs, almost all large chips have to budget power very strictly. Because of inherent drawbacks in the X86 architecture it is hard to do power consumption well, which is why X86 has never managed to do well in mobile products.
-
The main performance indicators of a CPU include the main frequency, external frequency, multiplier, front-side bus, the capacity and speed of the L1 and L2 caches, the working (core) voltage, the address bus width, the data bus width, and MIPS.
To judge how good a CPU is, you should look at all of the above indicators together.
The GHz you are talking about is the main frequency of the CPU
-
Definitely not! A high GHz number only means the CPU's clock is fast, and nothing more; it does not mean the overall performance is good.
To evaluate a CPU's performance, besides the main frequency, the cache is also very important. What is the cache? Simply put: the CPU is very fast and other hardware such as memory and hard disks cannot keep up, so the CPU has to wait when reading data; a cache stages the data the CPU is about to read in very fast memory ahead of time, which noticeably improves the CPU's efficiency.
So the larger the cache, the better the CPU's execution efficiency; and because CPUs keep getting faster, L1 and L2 caches exist to let that performance actually be used.
You surely know Pentium and Celeron: they often run at the same GHz, yet one is expensive and the other cheap. Why? Because the Pentium's overall performance is much better. And why is it so much better? The key is that their L1 and L2 caches are very different!
Having said all that, I think you get it: to judge a CPU's performance, look at both its main frequency and its cache.
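A tiny model of why that cache difference matters so much, using the standard average-memory-access-time formula; all the latency and miss-rate numbers are assumptions chosen only to show the trend, not specs of any real Pentium or Celeron:

```python
# Illustrative model of why caches matter: average memory access time (AMAT).
# AMAT = hit_time + miss_rate * miss_penalty. All numbers are assumptions
# picked only to show the trend.

def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    return hit_time_ns + miss_rate * miss_penalty_ns

no_cache    = amat(hit_time_ns=0.0, miss_rate=1.00, miss_penalty_ns=60.0)  # every access goes to RAM
small_cache = amat(hit_time_ns=1.0, miss_rate=0.10, miss_penalty_ns=60.0)  # small cache, more misses
big_cache   = amat(hit_time_ns=1.0, miss_rate=0.02, miss_penalty_ns=60.0)  # bigger cache, fewer misses

print(f"no cache:    {no_cache:.1f} ns per access")    # 60.0 ns
print(f"small cache: {small_cache:.1f} ns per access")  # 7.0 ns
print(f"big cache:   {big_cache:.1f} ns per access")    # 2.2 ns
```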