That a node starts before the minimum register is no reason to reject it,
since its end could still be in range. The check for the end already exists
two lines below, so we can simply remove the incorrect check.
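As an illustration of the resulting range test (a sketch only, with stand-in
names rather than the actual rbtree node fields):

  /* A block that starts before min may still end inside the sync window,
   * so only blocks that end before min or start after max are skipped. */
  static bool block_in_range(unsigned int base, unsigned int blklen,
                             unsigned int min, unsigned int max)
  {
          unsigned int end = base + blklen - 1;   /* last register in the block */

          return base <= max && end >= min;
  }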
Signed-off-by: Maarten ter Huurne <maarten@treewalker.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
The parameter passed to the regmap lock/unlock callbacks needs to be
map->lock_arg, but regcache passes just map. This works fine when no custom
locking callbacks are used, since in that case map->lock_arg equals map,
but it breaks when custom locking callbacks are used. The issue was
introduced in commit 0d4529c5 ("regmap: make lock/unlock functions
customizable") and is fixed by this patch.
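A minimal sketch of the locking rule being fixed here, with the regmap
internals reduced to the fields that matter (illustrative names, not the
real structure):

  struct regmap_sketch {
          void (*lock)(void *lock_arg);
          void (*unlock)(void *lock_arg);
          void *lock_arg;    /* equals the map itself unless custom callbacks are used */
  };

  static void regcache_access_sketch(struct regmap_sketch *map)
  {
          map->lock(map->lock_arg);       /* not map->lock(map) */
          /* ... read or update the cache ... */
          map->unlock(map->lock_arg);
  }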
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
_regmap_raw_write() contains code to call regcache_write() to write
values to the cache. That code calls memcpy() to copy the value data to
the start of the work_buf. However, at least when _regmap_raw_write() is
called from _regmap_bus_raw_write(), the value data is in the work_buf,
and this memcpy() operation may overwrite part of that value data,
depending on the value of reg_bytes + pad_bytes. At least when using
reg_bytes==1 and pad_bytes==0, corruption of the value data does occur.
To solve this, remove the memcpy() operation, and modify the subsequent
.parse_val() call to parse the original value buffer directly.
At least in the case of 8-bit register addresses and 16-bit values, and
writes of single registers at a time, this memcpy-then-parse combination
used to cancel each other out; for a work buffer containing xx 89 03,
the memcpy changed it to 89 03 03, and the parse_val changed it back to
89 89 03, thus leaving the value uncorrupted. This appears to be
completely accidental though. Since commit 8a819ff ("regmap: core: Split
out in place value parsing"), .parse_val only returns the parsed value
and does not modify the buffer, and hence does not (accidentally) undo
the corruption caused by the memcpy(). This caused bogus values to get
written to the HW, thus preventing e.g. audio playback on systems with a
WM8903 CODEC. This patch fixes that.
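A small userspace illustration of the overlap, mirroring the xx 89 03
example above (memmove stands in for the kernel's memcpy on the
overlapping region):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          /* work_buf after formatting: [reg] [val_hi] [val_lo] */
          unsigned char work_buf[3] = { 0xaa, 0x89, 0x03 };

          /* old behaviour with reg_bytes==1, pad_bytes==0: copy the two
           * value bytes to the start of the same buffer */
          memmove(work_buf, work_buf + 1, 2);

          /* buffer is now 89 03 03: the value bytes in their original
           * position have been partially overwritten */
          printf("%02x %02x %02x\n", work_buf[0], work_buf[1], work_buf[2]);
          return 0;
  }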
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
This reverts commit bc8ce4 ("regmap: don't corrupt work buffer in
_regmap_raw_write()") since it turns out that it can cause issues when
taken in isolation from the other changes in -next that led to its
discovery. On the basis that nobody noticed the problems for quite some
time without that subsequent work, let's drop it from v3.9.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Fix the format specifier in dev_dbg() and suppress the following warning:
drivers/base/regmap/regcache.c: In function
‘regcache_sync_block_raw_flush’:
drivers/base/regmap/regcache.c:593:2: warning: format ‘%d’ expects
argument of type ‘int’, but argument 4 has type ‘size_t’ [-Wformat]
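A hedged illustration of the fix (not the exact kernel message): size_t
arguments need the %zu specifier rather than %d.

  #include <linux/device.h>

  static void debug_sync_len(struct device *dev, size_t len)
  {
          /* %zu matches size_t on all architectures, silencing -Wformat */
          dev_dbg(dev, "syncing %zu bytes\n", len);
  }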
Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
regcache_sync_block_raw is used only in this file. Hence make it static.
Silences the following warning:
drivers/base/regmap/regcache.c:608:5: warning:
symbol 'regcache_sync_block_raw' was not declared. Should it be static?
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
When syncing blocks of data using raw writes, combine the writes into a
single block write, saving us bus overhead for setup, addressing and
teardown.
Currently the block write is done unconditionally, as hardware with a
register format that can support raw writes is expected to support
auto-incrementing writes; this decision may need to be revised in the
future.
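A conceptual sketch of the coalescing, not the kernel implementation:
`dirty' is a bitmap over the block, `vals' holds values already in device
format, and val_bytes is the width of one formatted value.

  #include <linux/bitops.h>
  #include <linux/regmap.h>
  #include <linux/types.h>

  static int sync_block_sketch(struct regmap *map, const u8 *vals,
                               const unsigned long *dirty,
                               unsigned int base, unsigned int count,
                               size_t val_bytes)
  {
          unsigned int i, start = 0;
          bool in_run = false;
          int ret;

          for (i = 0; i <= count; i++) {
                  bool need = i < count && test_bit(i, dirty);

                  if (need && !in_run) {
                          start = i;              /* open a new run */
                          in_run = true;
                  } else if (!need && in_run) {
                          /* one block write covers the whole run */
                          ret = regmap_raw_write(map, base + start,
                                                 vals + start * val_bytes,
                                                 (i - start) * val_bytes);
                          if (ret != 0)
                                  return ret;
                          in_run = false;
                  }
          }

          return 0;
  }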
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Reviewed-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
For code clarity after implementing block writes, split out the raw and
non-raw I/O sync implementations.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Reviewed-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
The idea of holding blocks of registers in device format is shared between
at least the rbtree and lzo cache formats, so split the loop that does the
sync out of the rbtree code so that optimisations on it can be reused.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Reviewed-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
The idea of maintaining a bitmap of present registers can usefully be
applied to other cache types that maintain blocks of cached registers, so
move the code out of the rbtree cache and into the generic regcache code.
Refactor the interface slightly as we go to wrap the set-bit and
enlarge-bitmap operations (since we never do one without the other) and
make it more robust for reads of uncached registers by bounds checking
before we look at the bitmap.
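A sketch of the wrapped operations described above (field and function
names are illustrative, not the real regcache ones):

  #include <linux/bitmap.h>
  #include <linux/errno.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  struct presence_sketch {
          unsigned long *present;         /* bitmap of registers known to exist */
          unsigned int present_nbits;     /* current size of the bitmap in bits */
  };

  /* Enlarging the bitmap and setting a bit always happen together. */
  static int presence_set(struct presence_sketch *p, unsigned int reg)
  {
          unsigned int nbits = reg + 1;
          unsigned long *grown;

          if (nbits > p->present_nbits) {
                  grown = krealloc(p->present,
                                   BITS_TO_LONGS(nbits) * sizeof(long),
                                   GFP_KERNEL);
                  if (!grown)
                          return -ENOMEM;
                  /* clear only the newly added bits */
                  bitmap_clear(grown, p->present_nbits,
                               nbits - p->present_nbits);
                  p->present = grown;
                  p->present_nbits = nbits;
          }

          set_bit(reg, p->present);
          return 0;
  }

  /* Reads bounds check before touching the bitmap, so registers beyond
   * it are simply reported as not present. */
  static bool presence_test(struct presence_sketch *p, unsigned int reg)
  {
          if (reg >= p->present_nbits)
                  return false;
          return test_bit(reg, p->present);
  }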
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Reviewed-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
This will bring no meaningful benefit by itself; it is done as a separate
commit to aid bisection if there are problems with the following commits
adding support for coalescing adjacent writes.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Mainly useful internally but exported since this is a public API that's
being checked for.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Provide a helper to do the size-based index into a block of registers and
use it when reading a value.
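For example, a sketch of such a helper (names are illustrative):

  #include <linux/types.h>

  /* Return the address of value 'idx' in a block whose values are stored
   * back to back in device format, 'word_size' bytes each. */
  static inline void *block_val_addr(void *base, size_t word_size,
                                     unsigned int idx)
  {
          return (u8 *)base + idx * word_size;
  }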
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
This patch aims to bring down the average number of nodes
in the rbtree cache and increase the average number of registers
per node. This should improve general lookup and traversal times.
This is achieved by setting the minimum size of a block within the
rbnode to the size of the rbnode itself. This will essentially
cache possibly non-existent registers, so to combat this scenario we keep
a separate bitmap in memory which keeps track of which registers exist.
The memory overhead of this change is likely on the order of 5-10%,
possibly less depending on the register file layout. On my test system
with a bitmap of ~4300 bits and a relatively sparse register layout, the
memory requirements for the entire cache did not increase (the number of
nodes dropped to about 50% of the original count, which compensated for
the bitmap).
A second patch that can be built on top of this can look at the
ratio `sizeof(*rbnode) / map->cache_word_size' in order to suitably
adjust the block length of each block.
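As an illustrative calculation (the node size here is a made-up figure):
if sizeof(*rbnode) were 40 bytes and map->cache_word_size were 2, each
block would cover at least 40 / 2 = 20 registers.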
Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
This allows the cache to sync values directly to the device when stored
in native format and also allows asynchronous I/O.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
_regmap_raw_write() contains code to call regcache_write() to write
values to the cache. That code calls memcpy() to copy the value data to
the start of the work_buf. However, at least when _regmap_raw_write() is
called from _regmap_bus_raw_write(), the value data is in the work_buf,
and this memcpy() operation may overwrite part of that value data,
depending on the value of reg_bytes + pad_bytes. At least when using
reg_bytes==1 and pad_bytes==0, corruption of the value data does occur.
To solve this, remove the memcpy() operation, and modify the subsequent
.parse_val() call to parse the original value buffer directly.
At least in the case of 8-bit register addresses and 16-bit values, and
writes of single registers at a time, this memcpy-then-parse combination
used to cancel each other out; for a work buffer containing xx 89 03,
the memcpy changed it to 89 03 03, and the parse_val changed it back to
89 89 03, thus leaving the value uncorrupted. This appears to be
completely accidental though. Since commit 8a819ff ("regmap: core: Split
out in place value parsing"), .parse_val only returns the parsed value
and does not modify the buffer, and hence does not (accidentally) undo
the corruption caused by the memcpy(). This caused bogus values to get
written to the HW, thus preventing e.g. audio playback on systems with a
WM8903 CODEC. This patch fixes that.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Display the name for the chip rather than just the primary IRQ so it is
clearer what exactly has failed.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
The last register block which falls into the specified range is not handled
correctly. The formula which calculates the number of registers that should
be synced is inverted (and off by one). E.g. if all registers in that block
should be synced, only one is synced, and if only one should be synced, all
(but one) are synced. To calculate the number of registers that need to be
synced we need to subtract the number of the first register in the block
from the max register number and add one. This patch updates the code
accordingly.
The issue was introduced in commit ac8d91c ("regmap: Supply ranges to the sync
operations").
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: stable@vger.kernel.org
Provide a feel for how much overhead the rbtree cache adds.
[Slightly reworded output in debugfs -- broonie]
Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
In the rbtree code we are exposing statistics relating to the
number of nodes/registers of the rbtree cache for each of the
devices. Ensure that `map->debugfs' has been initialized before
we attempt to initialize the debugfs entry for the rbtree cache.
Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: stable@vger.kernel.org
A simple fix to stop us leaking a runtime PM reference in the case where
we fail to enable a device.
Merge tag 'regmap-v3.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull regmap PM fix from Mark Brown:
"A simple fix to stop us leaking a runtime PM reference in the case
where we fail to enable a device."
* tag 'regmap-v3.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: irq: call pm_runtime_put in pm_runtime_get_sync failed case
This file lists the register ranges in the register map. The condition
to split the range is based on whether the block is readable or not.
Ensure that we lock the `debugfs_off_cache' list whenever we access or
modify it. Otherwise there is a possible race between the read()
operations of the `registers' file and the `range' file.
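A sketch of the rule, with the mutex, list and entry names as illustrative
stand-ins for the regmap debugfs internals:

  #include <linux/list.h>
  #include <linux/mutex.h>
  #include <linux/printk.h>

  struct off_cache_entry_sketch {
          struct list_head list;
          unsigned int base_reg;
          unsigned int max_reg;
  };

  struct debugfs_sketch {
          struct mutex cache_lock;                /* guards the offset cache */
          struct list_head debugfs_off_cache;
  };

  /* Both the 'registers' and 'range' read() paths take the lock before
   * walking or rebuilding the offset cache. */
  static void walk_off_cache(struct debugfs_sketch *d)
  {
          struct off_cache_entry_sketch *c;

          mutex_lock(&d->cache_lock);
          list_for_each_entry(c, &d->debugfs_off_cache, list)
                  pr_debug("range %x-%x\n", c->base_reg, c->max_reg);
          mutex_unlock(&d->cache_lock);
  }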
Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
We don't need to use any of the file position information
to calculate the base and max register of each block. Just
use the counter directly.
Set `i = base' at the top to silence a spurious GCC flow-analysis warning;
the value of `i' can never actually be undefined or 0 inside the
if (c) { ... } block.
Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Currently the value parsing operations both return the parsed value and
modify the passed buffer. This precludes their use in places like the cache
code, so split the in-place modification out into a new parse_inplace()
operation.
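A sketch of the resulting pair of operations for a 16-bit big-endian value
(illustrative names; the real callbacks live in the regmap format handling):

  #include <linux/types.h>
  #include <asm/unaligned.h>

  /* Returns the parsed value without touching the buffer, so it is safe
   * to use from the cache code. */
  static unsigned int parse_16_be_sketch(const void *buf)
  {
          return get_unaligned_be16(buf);
  }

  /* Converts the buffer to CPU order in place, preserving the old
   * behaviour for the I/O path that still needs it. */
  static void parse_16_be_inplace_sketch(void *buf)
  {
          u16 *b = buf;

          *b = get_unaligned_be16(buf);
  }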
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
It's more idiomatic to pass the map structure around and this means we
can use other bits of information from the map.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
If we're updating a value in place, it's more work to read the current
value and compare it with what we're about to set than it is to just write
the value into the cache; there are no further operations after the write
in the code, even though there's an early return here.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Trace when we start and complete async writes, and when we start and
finish blocking for their completion. This is useful for performance
analysis of the resulting I/O patterns.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Even in the failure case of pm_runtime_get_sync(), the usage_count is
incremented. In order to keep the usage_count correct and runtime power
management behaving properly, call pm_runtime_put(_sync) in that case.
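A sketch of the pattern (the helper name is made up):

  #include <linux/pm_runtime.h>

  static int runtime_get_sketch(struct device *dev)
  {
          int ret = pm_runtime_get_sync(dev);

          if (ret < 0) {
                  /* the usage count was incremented even though resume
                   * failed, so balance it before bailing out */
                  pm_runtime_put(dev);
                  return ret;
          }

          return 0;
  }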
Signed-off-by: Liu Chuansheng <chuansheng.liu@intel.com>
Signed-off-by: Li Fei <fei.li@intel.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
The sweeping change is to make add_taint() explicitly indicate whether to
disable lockdep, but it's a mechanical change.
Cheers,
Rusty.
Merge tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull module update from Rusty Russell:
"The sweeping change is to make add_taint() explicitly indicate whether
to disable lockdep, but it's a mechanical change."
* tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux:
MODSIGN: Add option to not sign modules during modules_install
MODSIGN: Add -s <signature> option to sign-file
MODSIGN: Specify the hash algorithm on sign-file command line
MODSIGN: Simplify Makefile with a Kconfig helper
module: clean up load_module a little more.
modpost: Ignore ARC specific non-alloc sections
module: constify within_module_*
taint: add explicit flag to show whether lock dep is still OK.
module: printk message when module signature fail taints kernel.
Some mmio devices have a dedicated interface clock that needs
to be enabled to access their registers. This patch optionally
enables a clock before accessing registers in the regmap_bus
callbacks.
I added (devm_)regmap_init_mmio_clk variants of the init
functions that have an added clk_id string parameter. This
is passed to clk_get to request the clock from the clk
framework.
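A usage sketch of the new variant; the driver name, config values and the
"ipg" clock id are made up:

  #include <linux/err.h>
  #include <linux/io.h>
  #include <linux/platform_device.h>
  #include <linux/regmap.h>

  static const struct regmap_config foo_regmap_config = {
          .reg_bits = 32,
          .reg_stride = 4,
          .val_bits = 32,
  };

  static int foo_probe(struct platform_device *pdev)
  {
          struct resource *res;
          void __iomem *base;
          struct regmap *map;

          res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
          base = devm_ioremap_resource(&pdev->dev, res);
          if (IS_ERR(base))
                  return PTR_ERR(base);

          /* the clock named "ipg" is requested via clk_get() and enabled
           * around register accesses by the mmio bus code */
          map = devm_regmap_init_mmio_clk(&pdev->dev, "ipg", base,
                                          &foo_regmap_config);
          if (IS_ERR(map))
                  return PTR_ERR(map);

          return 0;
  }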
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>