As you learn more about computers and how they work, you will occasionally run across something that does not seem to make sense. With that in mind, does emptying disk space actually speed up a computer? Today's SuperUser Q&A post has the answer to a puzzled reader's question.
Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
Screenshot courtesy of nchenga (Flickr).
SuperUser reader Remi.b wants to know why emptying disk space seems to speed up a computer:
I have been watching a lot of videos and now understand how computers work a bit better. I understand what RAM is, about volatile and non-volatile memory, and the process of swapping. I also understand why increasing RAM speeds up a computer.
What I do not understand is why cleaning up disk space seems to speed a computer up. Does it really speed a computer up? If so, why does it do so?
Does it have something to do with searching for memory space to save things or with moving things around to make a long enough continuous space to save something? How much empty space should I leave free on a hard disk?
Why does emptying disk space seem to speed up a computer?
SuperUser contributor Jason C has the answer for us:
“Why does emptying disk space speed up computers?”
It does not, at least not on its own. This is a really common myth. The reason it is a common myth is because filling up your hard drive often happens at the same time as other things that traditionally could slow down your computer (A). SSD performance does tend to degrade as they fill, but this is a relatively new issue, unique to SSDs, and is not really noticeable for casual users. Generally, low free disk space is just a red herring.
For example, things like:
1. File fragmentation. File fragmentation is an issue (B), but a lack of free space, while definitely one of many contributing factors, is not the only cause of it.
2. Search indexing is another example. Say that you have automatic indexing turned on and an OS that does not handle this gracefully. As you save more and more indexable content to your computer (documents and such), indexing may take longer and longer and may start to have an effect on the perceived speed of your computer while it is running, both in I/O and CPU usage. This is not related to free space, it is related to the amount of indexable content you have. However, running out of free space goes hand in hand with storing more content, hence a false connection is drawn.
3. Anti-virus software (similar to the search indexing example). Say that you have anti-virus software set up to do background scanning of your drive. As you have more and more scannable content, the search takes more I/O and CPU resources, possibly interfering with your work. Again, this is related to the amount of scannable content you have. More content often equals less free space, but the lack of free space is not the cause.
4. Installed software. Say that you have a lot of software installed that loads when your computer boots, thus slowing down start-up times. This slow down happens because lots of software is being loaded. However, installed software takes up hard drive space. Therefore, hard drive free space decreases at the same time that this happens, and again a false connection can be readily made.
5. Many other examples along these lines which, when taken together, appear to closely associate lack of free space with lower performance.
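Point 1 above can be made concrete with a toy sketch. This is a made-up block map in Python, not how any real file system lays out data; it only shows why writing a new file into already-scattered free space produces a fragmented file:

```python
# Toy model of a block map: "X" = blocks used by other files, "." = free.
# Hypothetical illustration only; real file systems are far more sophisticated.

def allocate(disk, n_blocks, label):
    """Place a file into the first n_blocks free blocks, wherever they happen to be."""
    placed = []
    for i, block in enumerate(disk):
        if block == ".":
            disk[i] = label
            placed.append(i)
            if len(placed) == n_blocks:
                return placed
    raise OSError("disk full")

disk = list("XX..X.X...XX")        # free space is already scattered
extents = allocate(disk, 4, "F")   # write a 4-block file "F"
print("".join(disk))               # XXFFXFXF..XX
print(extents)                     # [2, 3, 5, 7] -> "F" is split across 3 separate runs
```

Note that the drive was nowhere near full here; the file fragmented because the free space was scattered, which is why lack of free space is only one contributing factor among several.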
The above illustrates another reason that this is such a common myth: While the lack of free space is not a direct cause of slow down, uninstalling various applications, removing indexed or scanned content, etc. sometimes (but not always; outside the scope of this answer) increases performance again for reasons unrelated to the amount of free space remaining. But this also naturally frees up hard drive space. Therefore, again, an apparent (but false) connection between "more free space" and a "faster computer" can be made.
Consider: If you have a machine running slowly due to lots of installed software, etc., and you clone your hard drive (exactly) to a larger hard drive and then expand your partitions to gain more free space, the machine will not magically speed up. The same software loads, the same files are still fragmented in the same ways, the same search indexer still runs, and nothing changes despite having more free space.
“Does it have something to do with searching for memory space to save things?”
No. It does not. There are two very important things worth noting here:
1. Your hard drive does not search around to find places to put things. Your hard drive is dumb. It is nothing but a big block of addressed storage that blindly puts things where your OS tells it to and reads whatever is asked of it. Modern drives have sophisticated caching and buffering mechanisms designed around predicting what the OS is going to ask for, based on the experience we have gained over time (some drives are even aware of the file system that is on them), but essentially, think of your drive as just a big, dumb brick of storage with occasional bonus performance features.
2. Your operating system does not search for places to put things, either. There is no searching. Much effort has gone into solving this problem, as it is critical to file system performance. The way that data is actually organized on your drive is determined by your file system. For example, FAT32 (old DOS and Windows PCs), NTFS (later editions of Windows), HFS+ (Mac), ext4 (some Linux systems), and many others. Even the concepts of a "file" and a "directory" are merely products of typical file systems; hard drives know nothing about the mysterious beasts called files. Details are outside the scope of this answer. But essentially, all common file systems have ways of tracking where the available space is on a drive, so that a search for free space is, under normal circumstances (i.e. a file system in good health), unnecessary.
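To make this division of labor concrete, here is a minimal hypothetical sketch in Python (not any real drive firmware or file system): the "drive" below only reads and writes fixed-size blocks at the addresses it is given, while the "file system" layer keeps a free-block set so placing a file is a lookup, never a search of the disk contents:

```python
# A "drive" that blindly reads/writes fixed-size blocks at given addresses.
class DumbDrive:
    def __init__(self, n_blocks, block_size=512):
        self.block_size = block_size
        self.data = bytearray(n_blocks * block_size)

    def write_block(self, addr, payload):
        start = addr * self.block_size
        self.data[start:start + len(payload)] = payload

    def read_block(self, addr):
        start = addr * self.block_size
        return bytes(self.data[start:start + self.block_size])

# A toy "file system" layer: tracking free blocks up front means finding
# space is a cheap lookup (real file systems use on-disk bitmaps or trees).
class TinyFS:
    def __init__(self, drive):
        self.drive = drive
        n_blocks = len(drive.data) // drive.block_size
        self.free = set(range(n_blocks))
        self.files = {}  # name -> list of block addresses

    def write_file(self, name, payload):
        bs = self.drive.block_size
        blocks = []
        for off in range(0, len(payload), bs):
            addr = self.free.pop()  # no scanning of the disk contents
            self.drive.write_block(addr, payload[off:off + bs])
            blocks.append(addr)
        self.files[name] = blocks

    def read_file(self, name):
        return b"".join(self.drive.read_block(a) for a in self.files[name])

fs = TinyFS(DumbDrive(n_blocks=64))
fs.write_file("notes.txt", b"hello disk")
print(fs.read_file("notes.txt")[:10])  # b'hello disk'
```

The point of the sketch is the separation: the drive never decides where anything goes, and the file system never rummages through stored data looking for a gap.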
“Or with moving things around to make a long enough continuous space to save something?”
No. This does not happen, at least not with any file system I am aware of. Files just end up fragmented.
The process of "moving things around to make up a long enough contiguous space for saving something" is called defragmenting. This does not happen when files are written. This happens when you run your disk defragmenter. On newer editions of Windows, at least, this happens automatically on a schedule, but it is never triggered by writing a file.
Being able to avoid moving things around like this is key to file system performance, and is why fragmentation happens and why defragmentation exists as a separate step.
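Reduced to its essence, a defragmenter is a copy-and-remap pass. Here is a hypothetical map-level sketch in Python (a real defragmenter moves data block by block and updates file system metadata; this only shows the end result on the toy block map used above, where letters are files and "." is free space):

```python
def defragment(disk):
    """Rewrite a block map so each file's blocks are contiguous,
    with all free space collected into one run at the end.
    Files are ordered by first appearance so the result is deterministic.
    Toy illustration only, not a real defragmentation algorithm."""
    order = []
    for block in disk:
        if block != "." and block not in order:
            order.append(block)
    compacted = []
    for label in order:
        compacted.extend(b for b in disk if b == label)
    compacted.extend(b for b in disk if b == ".")
    return compacted

fragmented = list("XXFFXFXF..XX")     # file "F" is split across 3 runs
print("".join(defragment(fragmented)))  # XXXXXXFFFF..
```

Crucially, this is an explicit maintenance pass over existing data, not something the file system does on every write, which is exactly why it exists as a separate step.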
“How much empty space should I leave free on a hard disk?”
This is a trickier question to answer (and this answer has already turned into a small book).
Rules of thumb:
1. For all types of drives:
2. SSD-specific:
Personally, I usually grab a bigger drive when I have about 20-25 percent free space remaining. This is not related to performance; it is just that when I get to that point, I expect that I will probably be running out of space for data soon, and it is time to get a bigger drive.
More important than watching free space is making sure scheduled defragmentation is enabled where appropriate (not on SSDs) so that you never get to the point where it becomes dire enough to affect you.
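If you want to keep an eye on free space the way described above, Python's standard library can report it directly. The 20-25 percent threshold below is just the rule of thumb from this answer, not a hard limit:

```python
import shutil

# Report free space on the drive holding the current directory.
usage = shutil.disk_usage(".")
free_pct = usage.free / usage.total * 100
print(f"{free_pct:.1f}% free "
      f"({usage.free // 10**9} GB of {usage.total // 10**9} GB)")

if free_pct < 20:  # the answer's personal rule of thumb, not a performance cliff
    print("Below ~20% free: probably time to think about a bigger drive.")
```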
There is one last thing worth mentioning. One of the other answers here mentioned that SATA's half-duplex mode prevents reading and writing at the same time. While true, this is greatly oversimplified and is mostly unrelated to the performance issues being discussed here. What this means, simply, is that data cannot be transferred in both directions on the wire at the same time. However, SATA has a fairly complex specification involving tiny maximum block sizes (about 8 kB per block on the wire, I think), read and write operation queues, etc., and it does not preclude writes to buffers happening while reads are in progress, interleaved operations, etc.
Any blocking that occurs would be due to competing for physical resources, usually mitigated by plenty of cache. The duplex mode of SATA is almost entirely irrelevant here.
(A) "Slow down" is a broad term. Here I use it to refer to things that are either I/O-bound (i.e. if your computer is sitting there crunching numbers, the contents of the hard drive have no impact) or CPU-bound and competing with tangentially related things that have high CPU usage (i.e. anti-virus software scanning tons of files).
(B) SSDs are affected by fragmentation in that sequential access speeds are generally faster than random access, despite SSDs not facing the same limitations as a mechanical device (even then, lack of fragmentation does not guarantee sequential access due to wear leveling, etc.). However, in virtually every general use scenario, this is a non-issue. Performance differences due to fragmentation on SSDs are typically negligible for things like loading applications, booting the computer, etc.
(C) Assuming a sane file system that is not fragmenting files on purpose.
Be sure to read through the rest of this lively discussion over at SuperUser via the link below!
Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.