Notes on delayed allocation in the ext4 filesystem
Recently, while testing audio recording on a project, I noticed that data was being lost. The serial console showed a lot of overrun log messages.
An overrun is a notification from the driver layer to the application above it, saying that data is being consumed too slowly and the buffer has filled up.
Once the buffer is full, if the application still cannot drain the data in time, data loss naturally follows.
There are two obvious reasons why the application might be pulling data too slowly:
1. The CPU is busy, so the recording process cannot get scheduled in time.
2. The recording process is too slow at writing the data to a file.
top showed that the CPU was not busy, which rules out the first possibility.
On to the second one. If writing the file is slow, would a different storage medium behave differently?
Originally the recording was written to NAND flash, and there were overrun errors and data loss.
After saving the recording to the SD card instead, both the overrun errors and the data loss disappeared.
So slow file writes were almost certainly the cause.
Since slow file writes are the culprit, let's compare the write speed of the NAND flash and the SD card.
Using the commands:
sync; date; dd if=/dev/zero of=/data/xxx bs=4096 count=40960; sync; date
sync; date; dd if=/dev/zero of=/sdcard/xxx bs=4096 count=40960; sync; date
/data is a partition on the NAND flash.
The tests showed that writing 160 MB took anywhere from 35 s to 45 s on the NAND flash, but only about 20 s on the SD card.
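A variant of the same measurement that folds the flush into dd's own timing is sketched below; it simply reuses the paths above and assumes the local dd supports conv=fsync:
# time a 160 MB write, including the flush to the medium
time dd if=/dev/zero of=/data/xxx bs=4096 count=40960 conv=fsync
time dd if=/dev/zero of=/sdcard/xxx bs=4096 count=40960 conv=fsync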
Could other processes also be accessing the NAND flash?
Turn on the block_dump switch: echo 1 > /proc/sys/vm/block_dump,
then, with while true; do dmesg -c; sleep 1; done, I saw a few processes that accessed the NAND flash regularly.
I killed those processes.
Retesting showed the NAND flash speed was unchanged.
A bit puzzling. Is some setting, or some other process, to blame?
To minimize interference from other processes, I switched to recovery mode for the test.
Recovery mode does not mount the NAND flash partition, so mount it by hand:
mount -t ext4 /dev/block/mmcblk0px /data
and then run the same test.
The SD card speed was about the same as in normal mode, while the NAND flash speed jumped by roughly 30%.
So what is different about the NAND flash between recovery mode and normal mode?
In recovery mode, the NAND flash partition is mounted with the command:
mount -t ext4 /dev/block/mmcblk0p8 /data
In normal mode, it is mounted by the command in init.rc.
Comparing the two, the mount command in init.rc carries several extra options.
Eliminating them one by one showed that the nodelalloc option makes the difference.
In recovery mode, adding nodelalloc to the mount brings the speed down to roughly the normal-mode level.
In normal mode, dropping nodelalloc brings the speed back up.
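For reference, the two configurations can be switched between as sketched below; the device node and mount point follow the ones above, and whether an in-place remount accepts the option change depends on the kernel:
# nodelalloc: blocks are allocated as soon as write(2) copies data into the page cache
mount -t ext4 -o nodelalloc /dev/block/mmcblk0p8 /data
# delalloc is the ext4 default, so simply omitting the option enables it
umount /data && mount -t ext4 /dev/block/mmcblk0p8 /data
# some kernels also allow flipping the option in place
mount -o remount,nodelalloc /data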
So what exactly is this delalloc?
It turns out to be a new feature introduced by ext4.
Here is the description from the ext4 documentation in the kernel:
delalloc (*) Defer block allocation until just before ext4
writes out the block(s) in question. This
allows ext4 to better allocation decisions
more efficiently.
nodelalloc Disable delayed allocation. Blocks are allocated
when the data is copied from userspace to the
page cache, either via the write(2) system call
or when an mmap'ed page which was previously
unallocated is written for the first time.
From that description we basically know what delalloc is.
But to understand why delalloc came into being, some more digging is needed.
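As an aside, which mode a running system is actually using can be read straight from the mount options; a quick sketch, with /data standing for the partition above:
grep " /data " /proc/mounts
# ... ext4 rw,...,nodelalloc ...   -> delayed allocation is disabled
# if nodelalloc is absent from the options, the delalloc default is in effect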
Via the ext4 wiki (see [1]) I found an introduction to ext4 (see [2]).
Its explanation of delayed allocation is as follows:
2.6. Delayed allocation
Delayed allocation is a performance feature (it doesn't change the disk format) found in a few modern filesystems such as XFS, ZFS, btrfs or Reiser 4, and it consists in delaying the allocation of blocks as much as possible, contrary to what traditionally filesystems (such as Ext3, reiser3, etc) do: allocate the blocks as soon as possible. For example, if a process write()s, the filesystem code will allocate immediately the blocks where the data will be placed - even if the data is not being written right now to the disk and it's going to be kept in the cache for some time. This approach has disadvantages. For example when a process is writing continually to a file that grows, successive write()s allocate blocks for the data, but they don't know if the file will keep growing. Delayed allocation, on the other hand, does not allocate the blocks immediately when the process write()s, rather, it delays the allocation of the blocks while the file is kept in cache, until it is really going to be written to the disk. This gives the block allocator the opportunity to optimize the allocation in situations where the old system couldn't. Delayed allocation plays very nicely with the two previous features mentioned, extents and multiblock allocation, because in many workloads when the file is written finally to the disk it will be allocated in extents whose block allocation is done with the mballoc allocator. The performance is much better, and the fragmentation is much improved in some workloads.
As the text explains, combining delayed allocation with extents and multiblock allocation greatly improves allocation performance and, in some workloads, noticeably reduces fragmentation.
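The behaviour is easy to observe on a delalloc mount; the sketch below assumes filefrag from e2fsprogs is available and that /data is mounted without nodelalloc:
dd if=/dev/zero of=/data/demo bs=4096 count=1024   # the data sits in the page cache for now
filefrag -v /data/demo                             # extents typically still show as delalloc/unknown
sync                                               # force writeback; the blocks are allocated here
filefrag -v /data/demo                             # usually a single contiguous extent afterwards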
Next, let's look at how delayed allocation actually works.
The ext4 wiki (see [1]) also links to "Life of an ext4 write request" (see [3]),
which analyzes how a write request is handled when delayed allocation is enabled.
One point deserves special mention: when the disk is running low on space, delayed allocation should be turned off.
With delayed allocation, blocks are allocated only when the data is really written out, not when the application issues the write request.
If the allocation then fails for lack of space, the application's write has long since returned, so there is no way to report the error back; the application believes the write succeeded even though nothing reached the medium.
See the explanation below for the details:
In ext4_da_write_begin(), there's a potential fallback to nodelalloc mode. That happens if we are low on space (and possibly low on quota; but not sure). That's because when we estimate how much space is needed, we can guess wrong, especially as it comes to metadata allocation. We tend to guess high, because in particular for ENOSPC, we don't want to run out of space when we need to allocate an extent tree block. That's because in the delalloc write request, we don't actually do the block allocation until the writeback time --- and at that point we can't return an error to userspace. If we fail to allocate space at writeback time, data can potentially be lost without the calling application knowing about it. This is not the case for direct I/O, of course, since it doesn't use the writeback; but delalloc is all about what happens for buffered writes.) So when we come close to running out of disk space, we will turn off delayed allocation.
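The practical consequence for an application is that only an explicit flush can surface such an error; a minimal sketch, assuming dd supports conv=fsync (/data/rec.pcm is just a placeholder path):
# write the data and force it to the medium before trusting it;
# with delalloc, an ENOSPC may only show up at this point
dd if=/dev/zero of=/data/rec.pcm bs=4096 count=1024 conv=fsync || echo "write/fsync failed"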
Delayed allocation has another downside: it can make an individual write take an extremely long time (see [4]).
The reason is that, since allocation has been deferred, the extra work has to be done at writeback time.
Writeback takes a lock that the application's write path also needs.
If a write request arrives just as writeback is holding that lock for a long-running operation, the application's write can only wait.
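One way to check whether the recording process is hitting such stalls is to time its individual write calls with strace; just a sketch, where "recorder" is a placeholder for the actual process name:
strace -f -T -e trace=write -p "$(pidof recorder)" 2>&1 | grep -v "<0\.0"
# -T prints the time spent in each syscall; the grep keeps only calls that took
# roughly 0.1 s or more, where the lock contention described above would show up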