-
**Tom's Hardware** ran a very meaningful test: it measured the performance difference between hard drives with 8 MB and 16 MB of cache, to answer whether the 16 MB cache is worth paying for.
A hard disk's cache mainly buffers data: keeping data in the cache speeds up transfers when that data is requested again, and the NCQ feature of SATA-era drives also needs some cache space. Drives used to ship with 2 MB of cache; today 8 MB and 16 MB are common. So is a 16 MB cache necessary?
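The benefit described above — repeat requests being served from cache instead of the platter — can be sketched as a toy model in Python. This only illustrates the caching principle, not actual drive firmware; the cache size (8 entries standing in for 8 MB) and sector size are made-up values:

```python
from functools import lru_cache

# Count how many "slow" platter accesses actually happen.
SLOW_READS = {"count": 0}

# A tiny 8-entry cache stands in for the drive's 8 MB cache.
@lru_cache(maxsize=8)
def read_sector(lba: int) -> bytes:
    SLOW_READS["count"] += 1   # the expensive mechanical access
    return b"\x00" * 512       # pretend sector payload

# Nine requests, but only three distinct sectors.
for lba in [1, 2, 3, 1, 2, 3, 1, 2, 3]:
    read_sector(lba)

print(SLOW_READS["count"])     # only 3 slow reads for 9 requests
```

The six repeat requests never touch the "platter" at all, which is exactly why cached data transfers faster the second time around.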
They tested four Seagate Barracuda 10 drives, two with 8 MB of cache and two with 16 MB. The tests showed that the 16 MB cache does not bring a significant performance improvement, apart from an advantage in the "Program Performance: File Write" test on the Ultra ATA model; this held for both the SATA and PATA interfaces.
Detailed address (in English).
-
The bottleneck in computer performance is the hard drive.
The easiest way to improve the performance of your hard drive is to increase the cache.
If the price difference is small, go for the 16 MB model; the 8 MB one isn't worth considering.
-
It comes down to one thing: budget.
If you have the money, buy the 16 MB model; if not, buy the 8 MB one.
-
The cache provides high-speed data buffering for reads and writes to the HDD. A larger cache can greatly improve the drive's burst read and write speed, especially when the drive has to modify data frequently; it lets the drive perform at its best and can also extend its service life.
In fact, for 1 TB drives the difference between a 32 MB and a 64 MB cache is almost negligible.
The biggest improvement came in the jump from 2 MB to 8 MB; the difference between 16 MB and 64 MB is not large.
-
The transfer speed varies considerably between 16 MB, 32 MB, and 64 MB of hard disk cache. The cache (measured in KB or MB) is where the hard disk exchanges data with the external bus: during a read, the magnetic signal is converted into an electrical signal, and the data is then sent out step by step, bus cycle by bus cycle, as the cache is filled, emptied, refilled, and emptied again. So the role of the cache should not be underestimated: its capacity and speed directly affect the drive's transfer speed and service life.
However, it also depends on the interface: the difference is obvious on a SATA3 port but less so on SATA2, so first check whether your motherboard supports SATA3.
-
The 32 MB-cache hard disk has better read/write performance and a longer service life.
The main function of the hard disk cache is to hold the data that frequently needs to be read and written. The computer works with the data in the cache first, and only when the cached data reaches a certain amount is it written back to the disk. This reduces the number of actual disk operations, protects the disk from the wear of repeated reads and writes, and shortens write times.
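The batching behaviour just described can be sketched as a toy write-back buffer in Python. The class name, the flush threshold, and the idea of counting "disk operations" are all illustrative assumptions, not a real drive's logic:

```python
class WriteBackCache:
    """Toy write-back buffer: absorb writes in memory and flush to
    'disk' in batches, reducing the number of actual disk operations."""

    def __init__(self, flush_threshold: int = 4):
        self.buffer = {}           # lba -> pending data
        self.flush_threshold = flush_threshold
        self.disk_writes = 0       # count of actual flushes to disk

    def write(self, lba: int, data: bytes) -> None:
        self.buffer[lba] = data    # rewriting a buffered sector costs nothing
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.disk_writes += 1  # one batched disk operation
            self.buffer.clear()

cache = WriteBackCache()
for i in range(8):
    cache.write(i % 2, b"data")    # 8 logical writes, only 2 distinct sectors
cache.flush()
print(cache.disk_writes)           # 1 actual disk operation for 8 writes
```

Because the same two sectors are rewritten in place inside the buffer, eight logical writes collapse into a single physical one — the wear reduction the answer above is pointing at.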
Since a 32 MB cache is larger than a 16 MB one, it protects the hard disk relatively better, so other factors aside, a drive with a 32 MB cache should outlast one with a 16 MB cache.
Disk caching reduces the number of times the CPU has to go through I/O to the drive itself, improving disk I/O efficiency: a piece of memory holds the most frequently accessed disk content. Because accessing memory is an electronic operation while accessing the disk is a mechanical I/O operation, the disk I/O feels correspondingly faster.
It is precisely this read/write speed boost from large caches, together with competition from solid-state drives, that led traditional hard disk makers to launch hybrid drives: a mechanical disk plus a large cache plus large-capacity flash memory. The advantages are obvious. First, the flash makes the system run faster. Second, during a transfer the data goes to the flash at SSD speed, then to the cache, and is finally written continuously to the mechanical disk, with the flash acting as a buffer (as long as the transferred data is not much larger than the flash capacity, the transfer speed can be on par with an SSD).
-
Normal use, almost no difference.
Data is written to the cache before being written from the cache to the disk. Reads work the same way: data is placed in the cache first and then read from it.
Hard disk access exchanges data between the disk and memory, and a larger cache also raises the transfer speed; this is most visible when copying files. In daily use, though, the difference between a 64 MB and a 128 MB cache on an HDD is not noticeable.
-
With the same interface type and the same spindle speed, the larger the cache capacity, the higher the drive's storage efficiency.
Think of the disk as a large warehouse and the cache as its loading dock: the bigger the dock, the more data can move through per unit time. The faster the loading and unloading, the less time the head spends stopped and waiting for data, and the higher the overall efficiency.
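The loading-dock analogy can be made concrete with a short sketch: copying the same data through a bigger buffer takes fewer trips. The function name and sizes are made up for illustration; `io.BytesIO` stands in for the disk:

```python
import io

def copy_with_buffer(src: bytes, buffer_size: int) -> int:
    """Copy src through a fixed-size buffer; return how many
    load/unload trips through the 'dock' were needed."""
    reader = io.BytesIO(src)
    transfers = 0
    while reader.read(buffer_size):
        transfers += 1             # one trip through the work area
    return transfers

data = b"x" * 1024
print(copy_with_buffer(data, 16))  # small dock: 64 trips
print(copy_with_buffer(data, 64))  # larger dock: 16 trips
```

Quadrupling the buffer cuts the number of transfers to a quarter — the same amount of cargo, moved in fewer, larger loads.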
-
With the same cache, the 7200 RPM drive is faster because of its rotational speed, just as two people of the same build and weight can run at different speeds. With the same 7200 RPM spindle speed, look at the cache: it is like moving things in the same amount of time, but one person carries 16 items per trip and the other carries 32, so the amount moved differs.
A 32 MB-cache drive also performs fewer physical reads and writes than a 16 MB-cache one, so its life should be longer. And the faster the platters spin, the more data the head can read per unit time and the more data gets transferred, which shows up as faster read/write speeds.
-
I've tried an array of four 32 MB-cache dual-platter 500 GB drives and an array of two 16 MB-cache single-platter 500 GB drives. What I didn't expect was that the two 16 MB-cache single-platter 500 GB drives were actually faster. My advice to the OP: buy the single-platter 500 GB, 16 MB-cache model; the firmware is fine.
-
The so-called CPU cache is a storage area built into the CPU that is much faster than main memory. The L2 cache is slower and cheaper than the L1 cache, but still much faster than memory. Low-end parts commonly have 128 KB or 256 KB; better ones have 512 KB, and a 1 MB L2 cache is already very high-end. It is one of the areas where AMD's and Intel's mainstream high-end chips compete, e.g. the P4 506 versus the AMD 64-bit 3000+.
1. The CPU cache (cache memory) is temporary storage that sits between the CPU and main memory; its capacity is smaller than memory's but its exchange speed is faster. The data in the cache is only a small part of memory, but it is the part the CPU is about to access. When the CPU needs a lot of data, it can fetch it directly from the cache instead of from memory, which speeds up reads. Adding a cache to the CPU is therefore an efficient solution: the whole internal storage (cache + memory) becomes a high-speed system combining the cache's speed with memory's capacity.
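Why holding "the part about to be accessed" works so well can be sketched with a toy direct-mapped cache. The geometry (8 lines of 4 words) is an arbitrary assumption for illustration, far smaller than any real CPU cache:

```python
# Toy direct-mapped cache: 8 lines of 4 words each. Walking memory
# sequentially shows spatial locality: one miss loads a whole line,
# then the next 3 accesses to that line hit.
LINES, WORDS_PER_LINE = 8, 4
cache = [None] * LINES             # each slot holds the tag of a cached line

hits = misses = 0
for addr in range(64):             # walk 64 consecutive memory words
    line_no = addr // WORDS_PER_LINE
    slot = line_no % LINES
    if cache[slot] == line_no:
        hits += 1                  # data already in cache: fast path
    else:
        misses += 1                # fetch the whole line from memory
        cache[slot] = line_no

print(hits, misses)                # 48 hits, 16 misses
```

Three out of every four accesses are served at cache speed, even though the cache holds only a small slice of "memory" at any moment — which is the whole point of the paragraph above.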
A professional take: don't be misled by the two answers above. In today's multi-core CPUs, "1024" does not mean trouble: when several cores share a 1024 KB L2 cache, there is no data-exchange problem between separate caches, and Intel's Core series uses exactly this architecture. "2x512" means two cores, each with its own exclusive 512 KB of L2 cache; AMD's CPUs are designed this way. L2 cache design is constrained by both size and speed. On Intel's CPUs the memory controller sits outside the core, so exchanges between memory and CPU cannot be very fast; to reduce the number of slow CPU-memory exchanges, Intel designs the L2 cache as a single shared block with a relatively large capacity. It is like one big house stocked full of food: all the nearby residents can get food quickly and easily, sharing the resource. AMD integrates the memory controller directly into the CPU, so its front-side bus is very fast; but L2 cache is a large share of CPU cost, and for architectural reasons AMD could not make its L2 shared, so each core gets an exclusive cache, with the cores connected over the HT bus (AMD's proprietary front-side bus). That approach is somewhat dated, and AMD's current technology trails Intel's by a whole generation. Still, a computer's speed is not determined by the CPU alone: because AMD integrates the memory controller on the die, it is faster than the memory controller Intel puts on the motherboard, so overall the difference is not large, though in terms of the CPU itself Intel is still stronger.