# afcb7ca0 | 25-Apr-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: check truncation of mapping after lock_page
We call lock_page when we need to update a page after readpage. Between the grab and the lock, the page can be truncated by another thread. So, after lock_page, we should check whether the page was truncated or not.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
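A minimal sketch of the pattern described above, assuming a data-page lookup helper; grab_cache_page(), lock_page() and the f2fs_put_page() helper are used as in f2fs, but the function name and the condensed error handling are illustrative only, not the actual patch.

/* sketch only -- needs <linux/pagemap.h>, <linux/err.h> and f2fs.h */
static struct page *get_locked_data_page_sketch(struct inode *inode, pgoff_t index)
{
        struct address_space *mapping = inode->i_mapping;
        struct page *page;
repeat:
        page = grab_cache_page(mapping, index); /* returns a locked page */
        if (!page)
                return ERR_PTR(-ENOMEM);

        /* ... readpage path runs here and leaves the page unlocked ... */

        lock_page(page);
        if (page->mapping != mapping) {
                /* truncated by another thread while we waited for the lock */
                f2fs_put_page(page, 1);         /* put and unlock */
                goto repeat;
        }
        return page;
}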
# 55008d84 | 25-Apr-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: enhance alloc_nid and build_free_nids flows
In order to avoid build_free_nid lock contention, let's change the order of function calls as follows.
At first, check whether there are enough free nids.
 - If available, just get a free nid with spin_lock without any overhead.
 - Otherwise, conduct build_free_nids:
   scan nat pages, journal nat entries, and nat cache entries.
We should be careful not to serve free nids that are produced in the middle of build_free_nids; we can get stable free nids only after build_free_nids is done.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
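A rough sketch of the reordered flow, for illustration only; fcnt, free_nid_list_lock and build_free_nids follow the names in the commit text, while pick_first_free_nid() is a hypothetical helper standing in for the list manipulation, and the real function signatures differ.

/* sketch only -- not the actual alloc_nid() implementation */
static bool alloc_nid_sketch(struct f2fs_nm_info *nm_i, nid_t *nid)
{
retry:
        spin_lock(&nm_i->free_nid_list_lock);
        if (nm_i->fcnt > 0) {
                /* enough free nids: serve one under the spinlock only */
                *nid = pick_first_free_nid(nm_i);       /* hypothetical helper */
                nm_i->fcnt--;
                spin_unlock(&nm_i->free_nid_list_lock);
                return true;
        }
        spin_unlock(&nm_i->free_nid_list_lock);

        /*
         * Otherwise rebuild the free nid list by scanning nat pages,
         * journal nat entries and nat cache entries, then retry.
         */
        if (!build_free_nids(nm_i))     /* assumed to return the number found */
                return false;
        goto retry;
}

Note that intermediate free nids made while build_free_nids is still running are never served here, because the count is only consulted under the spinlock after the rebuild completes.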
# 9198aceb | 24-Apr-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: check nid == 0 in add_free_nid
It is more obvious to have add_free_nid itself check whether the free nid is zero or not.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
# c718379b | 23-Apr-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: give a chance to merge IOs by IO scheduler
Previously, background GC submits many 4KB read requests to load victim blocks and/or their (i)node blocks.
...
f2fs_gc : f2fs_readpage: ino = 1, page_index = 0xb61, blkaddr = 0x3b964ed
f2fs_gc : block_rq_complete: 8,16 R () 499854968 + 8 [0]
f2fs_gc : f2fs_readpage: ino = 1, page_index = 0xb6f, blkaddr = 0x3b964ee
f2fs_gc : block_rq_complete: 8,16 R () 499854976 + 8 [0]
f2fs_gc : f2fs_readpage: ino = 1, page_index = 0xb79, blkaddr = 0x3b964ef
f2fs_gc : block_rq_complete: 8,16 R () 499854984 + 8 [0]
...
However, since many of these IOs are sequential, we can give the IO scheduler a chance to merge them. In order to do that, let's use blk_plug.
...
f2fs_gc : f2fs_iget: ino = 143
f2fs_gc : f2fs_readpage: ino = 143, page_index = 0x1c6, blkaddr = 0x2e6ee
f2fs_gc : f2fs_iget: ino = 143
f2fs_gc : f2fs_readpage: ino = 143, page_index = 0x1c7, blkaddr = 0x2e6ef
<idle> : block_rq_complete: 8,16 R () 1519616 + 8 [0]
<idle> : block_rq_complete: 8,16 R () 1519848 + 8 [0]
<idle> : block_rq_complete: 8,16 R () 1520432 + 96 [0]
<idle> : block_rq_complete: 8,16 R () 1520536 + 104 [0]
<idle> : block_rq_complete: 8,16 R () 1521008 + 112 [0]
<idle> : block_rq_complete: 8,16 R () 1521440 + 152 [0]
<idle> : block_rq_complete: 8,16 R () 1521688 + 144 [0]
<idle> : block_rq_complete: 8,16 R () 1522128 + 192 [0]
<idle> : block_rq_complete: 8,16 R () 1523256 + 328 [0]
...
Note that this issue should be addressed in checkpoint, and some readahead flows too.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
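The plugging pattern itself is small. A hedged sketch follows, where do_gc_reads() is a hypothetical stand-in for the loop that issues the 4KB victim/node reads; blk_start_plug()/blk_finish_plug() are the stock block-layer calls from <linux/blkdev.h>.

#include <linux/blkdev.h>

/* sketch only: wrap the burst of small GC reads in a plug so the
 * IO scheduler can merge adjacent requests before dispatch */
static void f2fs_gc_reads_plugged(struct f2fs_sb_info *sbi, unsigned int segno)
{
        struct blk_plug plug;

        blk_start_plug(&plug);
        do_gc_reads(sbi, segno);        /* hypothetical: submits many 4KB reads */
        blk_finish_plug(&plug);         /* unplug: merged requests hit the device */
}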
Revision tags: v3.9-rc8
# 51dd6249 | 19-Apr-2013 | Namjae Jeon <namjae.jeon@samsung.com>
f2fs: add tracepoints for truncate operation
Add tracepoints for tracing the truncate operations like truncate node/data blocks, f2fs_truncate, etc.
Tracepoints are added at the entry and exit of each operation to trace its success and failure.
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Pankaj Kumar <pankaj.km@samsung.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> [Jaegeuk: combine and modify the tracepoint structures] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
# f53f292e | 20-Apr-2013 | H. Peter Anvin <hpa@linux.intel.com>
Merge remote-tracking branch 'efi/chainsaw' into x86/efi
Resolved Conflicts:
        drivers/firmware/efivars.c
        fs/efivarfs/file.c
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Revision tags: v3.9-rc7, v3.9-rc6, v3.9-rc5, v3.9-rc4, v3.9-rc3, v3.9-rc2, v3.9-rc1, v3.8, v3.8-rc7, v3.8-rc6, v3.8-rc5, v3.8-rc4, v3.8-rc3, v3.8-rc2, v3.8-rc1, v3.7, v3.7-rc8, v3.7-rc7
# 39936837 | 22-Nov-2012 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: introduce a new global lock scheme
In the previous version, f2fs uses global locks according to the usage types, such as directory operations, block allocation, block write, and so on.
Reference the following lock types in f2fs.h.

enum lock_type {
        RENAME,         /* for renaming operations */
        DENTRY_OPS,     /* for directory operations */
        DATA_WRITE,     /* for data write */
        DATA_NEW,       /* for data allocation */
        DATA_TRUNC,     /* for data truncate */
        NODE_NEW,       /* for node allocation */
        NODE_TRUNC,     /* for node truncate */
        NODE_WRITE,     /* for node write */
        NR_LOCK_TYPE,
};
In that case, we lose performance under a multi-threading environment, since every type of operation must be conducted one at a time.
In order to address the problem, let's share the locks globally with a mutex array regardless of any types. So, let users grab a mutex and perform their jobs in parallel as much as possible.
For this, I propose a new global lock scheme as follows.
0. Data structure
 - f2fs_sb_info -> mutex_lock[NR_GLOBAL_LOCKS]
 - f2fs_sb_info -> node_write

1. mutex_lock_op(sbi)
 - try to get an available lock from the array.
 - returns the index of the acquired lock variable.

2. mutex_unlock_op(sbi, index of the lock)
 - unlock the lock at the given index.

3. mutex_lock_all(sbi)
 - grab all the locks in the array before the checkpoint.

4. mutex_unlock_all(sbi)
 - release all the locks in the array after the checkpoint.

5. block_operations()
 - call mutex_lock_all()
 - sync_dirty_dir_inodes()
 - grab node_write
 - sync_node_pages()

Note that the pairs mutex_lock_op()/mutex_unlock_op() and mutex_lock_all()/mutex_unlock_all() should be used together.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
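A minimal sketch of the four primitives, assuming the array is a field such as fs_lock[NR_GLOBAL_LOCKS] in f2fs_sb_info (the commit text calls it mutex_lock[]); the scan-and-trylock policy below is illustrative, not the exact implementation.

/* sketch only -- uses plain mutex_trylock()/mutex_lock()/mutex_unlock() */
static int mutex_lock_op_sketch(struct f2fs_sb_info *sbi)
{
        int i;

        /* opportunistically grab any free slot ... */
        for (i = 0; i < NR_GLOBAL_LOCKS; i++)
                if (mutex_trylock(&sbi->fs_lock[i]))
                        return i;

        /* ... or block on a fixed slot when everything is busy */
        mutex_lock(&sbi->fs_lock[0]);
        return 0;
}

static void mutex_unlock_op_sketch(struct f2fs_sb_info *sbi, int ilock)
{
        mutex_unlock(&sbi->fs_lock[ilock]);
}

static void mutex_lock_all_sketch(struct f2fs_sb_info *sbi)
{
        int i;

        /* always taken in index order so checkpoint cannot deadlock with users */
        for (i = 0; i < NR_GLOBAL_LOCKS; i++)
                mutex_lock(&sbi->fs_lock[i]);
}

static void mutex_unlock_all_sketch(struct f2fs_sb_info *sbi)
{
        int i;

        for (i = 0; i < NR_GLOBAL_LOCKS; i++)
                mutex_unlock(&sbi->fs_lock[i]);
}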
# 49952fa1 | 03-Apr-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: reduce redundant spin_lock operations
This patch reduces redundant spin_lock operations in alloc_nid_failed(). The alloc_nid_failed() does not need to delete an entry and add it again, which triggers redundant spin_lock and spin_unlock calls.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
# b7473754 | 31-Mar-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: avoid race for summary information
In order to do GC more reliably, I'd like to lock the victim summary page until its GC is completed, and also prevent any checkpoint process.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
# 56ae674c | 30-Mar-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: remove redundant lock_page calls
In get_node_page, we do not need to call lock_page all the time.
If the node page is cached as uptodate,
1. grab_cache_page locks the page,
2. read_node_page unlocks the page, and
3. lock_page is called for further processing.
Let's avoid this.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
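A condensed sketch of the resulting fast path, under the assumptions above; read_node_page() and f2fs_put_page() are used as named in the commit, READ_SYNC is the read flag of that era, and the error handling is simplified, so this is not the actual get_node_page().

/* sketch only */
static struct page *get_node_page_sketch(struct f2fs_sb_info *sbi, nid_t nid)
{
        struct address_space *mapping = sbi->node_inode->i_mapping;
        struct page *page;
        int err;
repeat:
        page = grab_cache_page(mapping, nid);   /* returns a locked page */
        if (!page)
                return ERR_PTR(-ENOMEM);

        if (PageUptodate(page))                 /* cached: skip the unlock/relock cycle */
                return page;

        err = read_node_page(page, READ_SYNC);  /* submits the read, unlocks the page */
        if (err)
                return ERR_PTR(err);

        lock_page(page);                        /* only needed when we really read */
        if (!PageUptodate(page) || page->mapping != mapping) {
                f2fs_put_page(page, 1);
                goto repeat;
        }
        return page;
}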
# dca3a783 | 01-Apr-2013 | Jon Hunter <jon-hunter@ti.com>
Merge commit '31d9adca82ce65e5c99d045b5fd917c702b6fce3' into tmp
Conflicts: arch/arm/plat-omap/dmtimer.c
# 79b5793b | 27-Mar-2013 | Alexandru Gheorghiu <gheorghiuandru@gmail.com>
f2fs: use kmemdup
Use kmemdup instead of kzalloc and memcpy.
Signed-off-by: Alexandru Gheorghiu <gheorghiuandru@gmail.com> Acked-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
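For illustration, the generic shape of the change rather than the exact f2fs call site; kmemdup() comes from <linux/slab.h> and needs <linux/string.h> for memcpy().

/* before: allocate, then copy */
static void *dup_buf_old(const void *src, size_t len)
{
        void *dst = kzalloc(len, GFP_KERNEL);

        if (!dst)
                return NULL;
        memcpy(dst, src, len);
        return dst;
}

/* after: kmemdup() allocates and copies in one call */
static void *dup_buf_new(const void *src, size_t len)
{
        return kmemdup(src, len, GFP_KERNEL);
}

The two are equivalent here even though kzalloc zeroes the buffer, because memcpy overwrites the full length anyway.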
# 6aeedba2 | 29-Mar-2013 | Jiri Kosina <jkosina@suse.cz>
Merge tag v3.9-rc1 into for-3.9/upstream-fixes
This is done so that I am able to apply fix for commit 0322bd3980b3 ("usb hid quirks for Masterkit MA901 usb radio") which went into 3.9-rc1.
# b3fecf8c | 27-Mar-2013 | Jiri Kosina <jkosina@suse.cz>
Merge branch 'for-3.10/hid-driver-transport-cleanups' into for-3.10/mt-hybrid-finger-pen
# fa372417 | 20-Mar-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: remain nat cache entries for further free nid allocation
In the checkpoint flow, f2fs investigates the total nat cache entries. Previously, if an entry had NULL_ADDR, f2fs dropped the entry and added the obsolete nid to the free nid list. However, this free nid will be reused soon, resulting in a nat entry miss. In order to avoid this, we don't need to drop the nat cache entry at this moment.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
# 04431c44 | 15-Mar-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: fix not to allocate max_nid
build_free_nids should not add free nids beyond nm_i->max_nid. But there was a hole through which an invalid free nid could be added, in the following scenario.
Let's suppose nm_i->max_nid = 150 and the last NAT page has 100 ~ 200 nids.
build_free_nids
 - get_current_nat_page loads the last NAT page
 - scan_nat_page can add 100 ~ 200 nids
   -> Bug here!

So, when scanning a NAT page, we should check whether each candidate is over max_nid or not.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
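A sketch of the boundary check; the NAT-block walk and add_free_nid() mirror names from the commit text, but the body is illustrative and not the real scan_nat_page().

/* sketch only */
static void scan_nat_page_sketch(struct f2fs_nm_info *nm_i,
                                 struct f2fs_nat_block *nat_blk, nid_t start_nid)
{
        nid_t nid = start_nid;
        int i;

        for (i = 0; i < NAT_ENTRY_PER_BLOCK; i++, nid++) {
                /* the fix: never offer a candidate at or beyond max_nid */
                if (nid >= nm_i->max_nid)
                        break;
                if (le32_to_cpu(nat_blk->entries[i].block_addr) == NULL_ADDR)
                        add_free_nid(nm_i, nid);        /* unused nid: offer it */
        }
}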
# c3850aa1 | 13-Mar-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: fix return value of releasepage for node and data
If the return value of releasepage is equal to zero, the page cannot be reclaimed. Instead, we should return 1 in order to reclaim clean pages.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
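A hedged sketch of what a ->releasepage that follows this rule looks like; the metadata-drop step is a placeholder and the function is not the f2fs implementation, only the return-value convention is the point.

/* sketch only: the aops ->releasepage signature is (struct page *, gfp_t) */
static int release_page_sketch(struct page *page, gfp_t wait)
{
        if (PageDirty(page))
                return 0;       /* dirty pages cannot be released */

        /* drop any private in-memory state attached to the page here */
        ClearPagePrivate(page);

        return 1;               /* clean page: tell the VM it may be reclaimed */
}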
# 48cb76c7 | 13-Mar-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: scan next nat page to reuse free nids in there
When we build new free nids, let's scan the just next NAT page instead of skipping a couple of previously scanned pages, in order to reuse the free nids in there. Otherwise, we can use too wide a range of nids even though several nids were deallocated, and their node pages can also be cached in the node_inode's address space. This means that we can retain lots of clean pages in main memory, which induces mm's reclaiming overhead.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
# 08d8058b | 13-Mar-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: should check the node page was truncated first
Currently, f2fs doesn't reclaim any node pages. However, if we find during f2fs_write_node_page that a node page was truncated (its block address is zero), we should not skip that node page; we should return zero so that it can be reclaimed.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
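A sketch of the early-exit under the stated assumptions; get_node_info(), NULL_ADDR, dec_page_count() and F2FS_DIRTY_NODES mirror f2fs naming, but the body is illustrative rather than the actual f2fs_write_node_page().

/* sketch only */
static int write_node_page_sketch(struct page *page,
                                  struct writeback_control *wbc)
{
        struct f2fs_sb_info *sbi = F2FS_SB(page->mapping->host->i_sb);
        nid_t nid = page->index;        /* node pages are indexed by nid */
        struct node_info ni;

        get_node_info(sbi, nid, &ni);

        /* the node was truncated: its block address is zero (NULL_ADDR) */
        if (ni.blk_addr == NULL_ADDR) {
                dec_page_count(sbi, F2FS_DIRTY_NODES);
                unlock_page(page);
                return 0;       /* returning 0, not skipping, lets the page be reclaimed */
        }

        /* ... otherwise fall through to the normal write-out path ... */
        return 0;
}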
# 393ff91f | 08-Mar-2013 | Jaegeuk Kim <jaegeuk.kim@samsung.com>
f2fs: reduce unnecessary locking pages during read
This patch reduces redundant locking and unlocking of pages during read operations. In f2fs_readpage, let's use wait_on_page_locked() instead of lock_page. And then, when we finally need to modify any data, let's lock the page so that we can avoid lock contention.
[readpage rule]
 - The f2fs_readpage returns an unlocked page, or a released page in error cases.
 - Its caller should handle the read error, -EIO, after locking the page, which indicates read completion.
 - Its caller should check PageUptodate after grab_cache_page.
Signed-off-by: Changman Lee <cm224.lee@samsung.com> Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
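A condensed sketch of the caller-side pattern this rule implies; f2fs_readpage() is assumed to take (sbi, page, blk_addr, type) as referenced by this series, READ_SYNC is the read flag of that era, and the error handling is simplified.

/* sketch only -- a caller that follows the readpage rule above */
static struct page *read_then_modify_sketch(struct f2fs_sb_info *sbi,
                                            struct address_space *mapping,
                                            pgoff_t index, block_t blk_addr)
{
        struct page *page;
repeat:
        page = grab_cache_page(mapping, index); /* locked page */
        if (!page)
                return ERR_PTR(-ENOMEM);
        if (PageUptodate(page))
                return page;                    /* already valid, still locked */

        /* submits the bio; the end_io handler unlocks the page */
        if (f2fs_readpage(sbi, page, blk_addr, READ_SYNC))
                return ERR_PTR(-EIO);

        /* wait for read completion without taking the page lock */
        wait_on_page_locked(page);

        /* lock only now, because we are about to modify the page */
        lock_page(page);
        if (!PageUptodate(page)) {
                f2fs_put_page(page, 1);
                return ERR_PTR(-EIO);           /* read error per the rule above */
        }
        if (page->mapping != mapping) {
                f2fs_put_page(page, 1);
                goto repeat;                    /* truncated meanwhile */
        }
        return page;
}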
# 0d4a42f6 | 19-Mar-2013 | Daniel Vetter <daniel.vetter@ffwll.ch>
Merge tag 'v3.9-rc3' into drm-intel-next-queued
Backmerge so that I can merge Imre Deak's coalesced sg entries fixes, which depend upon the new for_each_sg_page introduced in
commit a321e91b6d73ed011ffceed384c40d2785cf723b
Author: Imre Deak <imre.deak@intel.com>
Date:   Wed Feb 27 17:02:56 2013 -0800

    lib/scatterlist: add simple page iterator
The merge itself is just two trivial conflicts:
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
# d608d71c | 18-Mar-2013 | Mauro Carvalho Chehab <mchehab@redhat.com>
Merge tag 'v3.9-rc3' into v4l_for_linus
Linux 3.9-rc3
* tag 'v3.9-rc3': (11231 commits)
  Linux 3.9-rc3
  perf,x86: fix link failure for non-Intel configs
  perf,x86: fix wrmsr_on_cpu() warning on suspend/resume
  Btrfs: fix warning of free_extent_map
  perf,x86: fix kernel crash with PEBS/BTS after suspend/resume
  ALSA: hda - Fix missing EAPD/GPIO setup for Cirrus codecs
  sound: sequencer: cap array index in seq_chn_common_event()
  mfd: twl4030-madc: Remove __exit_p annotation
  ALSA: hda/ca0132 - Remove extra setting of dsp_state.
  ALSA: hda/ca0132 - Check download state of DSP.
  ALSA: hda/ca0132 - Check if dspload_image succeeded.
  mm/fremap.c: fix possible oops on error path
  list: Fix double fetch of pointer in hlist_entry_safe()
  Btrfs: fix warning when creating snapshots
  Btrfs: return as soon as possible when edquot happens
  Btrfs: return EIO if we have extent tree corruption
  btrfs: use rcu_barrier() to wait for bdev puts at unmount
  Btrfs: remove btrfs_try_spin_lock
  Btrfs: get better concurrency for snapshot-aware defrag work
  hwmon: (pmbus/ltc2978) Fix temperature reporting
  ...
# 25c0a6e5 | 01-Mar-2013 | Namjae Jeon <namjae.jeon@samsung.com>
f2fs: avoid extra ++ while returning from get_node_path
In all the breaking conditions in get_node_path, 'n' is used to track the index in the offset[] array, but every breaking path also does n++. So, remove the ++ from the breaking paths. Also, avoid the reset of 'level = 0' in the first case.
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
# 3aa770a9 | 01-Mar-2013 | Namjae Jeon <namjae.jeon@samsung.com>
f2fs: optimize and change return path in lookup_free_nid_list
Optimize and change return path in lookup_free_nid_list
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
# e0f56cb4 | 02-Feb-2013 | Namjae Jeon <namjae.jeon@samsung.com>
f2fs: optimize get node page readahead part
We can remove the call to find_get_page that gets a page from the cache and checks whether it is up to date; instead, we can make use of grab_cache_page itself to fetch the page from the cache. So, remove the call and move the PageUptodate check to the proper place, also taking care to move the lock_page condition into the page_hit part.
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>