On SPEAr platforms there is at least one peripheral, JPEG, which can act as flow
controller for DMA transfers. Currently the DMA controller driver doesn't
support peripheral flow controller configurations.
This patch adds a device_fc field to struct pl08x_channel_data, which is used
only for slave transfers and is ignored for mem2mem transfers.
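As an illustration, the platform data would carry the new flag roughly like
this (a sketch only; surrounding fields abbreviated, and only device_fc comes
from this patch):

	struct pl08x_channel_data {
		const char *bus_id;
		int min_signal;
		int max_signal;
		/* ... other fields ... */
		bool device_fc;	/* true: peripheral is the flow controller
				 * (slave transfers only) */
	};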
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When we have DMA transfers between a peripheral and memory, we shouldn't reduce
the peripheral's width at all, as that may be a strict requirement. But we can
always reduce the width of the memory access, at some cost in performance.
Thus, we must select the peripheral as master, not memory (see the sketch
below).
This also rearranges the code to make it shorter.
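A minimal sketch of the selection rule, with hypothetical local names (not the
driver's exact code):

	/* The bus whose width must not change (the peripheral side)
	 * becomes the master; the memory side may be narrowed. */
	static void choose_master_bus(struct pl08x_bus_data *src_bus,
				      struct pl08x_bus_data *dst_bus,
				      struct pl08x_bus_data **mbus,
				      struct pl08x_bus_data **sbus,
				      bool src_is_periph)
	{
		if (src_is_periph) {
			*mbus = src_bus;	/* keep peripheral width */
			*sbus = dst_bus;	/* memory may be narrowed */
		} else {
			*mbus = dst_bus;
			*sbus = src_bus;
		}
	}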
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently lli_len is aligned to the min of the two widths, which looks to be
incorrect. Instead, it should be aligned to the max of both widths.
Let's say total_size = 441 bytes.
MIN: let's check whether min() suits:
CASE 1: srcwidth = 1, dstwidth = 4
min(src, dst) = 1
i.e. we program the transfer size in the control register to 441.
Now, everything is fine up to byte 440, but the DMAC can't transfer the last
byte to dst, as the dst width is 4.
CASE 2: srcwidth = 4, dstwidth = 1
min(src, dst) = 1
i.e. we program the transfer size in the control register to 110 (data transferred = 110 * srcwidth).
So here too 1 byte is left over, but on the source side.
MAX: let's check whether max() suits:
CASE 3: srcwidth = 1, dstwidth = 4
max(src, dst) = 4
Aligned size is 440.
i.e. we program the transfer size in the control register to 440.
Now all 440 bytes are transferred without any issues.
CASE 4: srcwidth = 4, dstwidth = 1
max(src, dst) = 4
Aligned size is 440.
i.e. we program the transfer size in the control register to 110 (data transferred = 110 * srcwidth).
Now, here as well, all 440 bytes are transferred without any issues.
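The same arithmetic as a self-contained user-space check (hypothetical helper
names, not driver code):

	#include <stdio.h>

	/* round len down to a multiple of width (width is a power of two) */
	static unsigned int align_down(unsigned int len, unsigned int width)
	{
		return len & ~(width - 1);
	}

	int main(void)
	{
		unsigned int total = 441, srcwidth = 1, dstwidth = 4;
		unsigned int w = srcwidth > dstwidth ? srcwidth : dstwidth;
		unsigned int aligned = align_down(total, w);

		/* CASE 3: prints 440, i.e. 440 transfers of 1 byte each */
		printf("transfer size field = %u\n", aligned / srcwidth);
		return 0;
	}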
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Code for creating single-byte LLIs is present in several places. Create a
routine to avoid code redundancy.
Also, we don't need one LLI per single-byte transfer; a single LLI can cover
all the single-byte transfers (see the sketch below).
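A hypothetical shape of the factored-out helper, reusing the driver's existing
pl08x_cctl_bits()/pl08x_fill_lli_for_desc() (not the exact routine from the
patch):

	/* One LLI moving 'len' bytes with byte-wide (8-bit) accesses on
	 * both buses, instead of one LLI per byte. */
	static void prep_single_byte_llis(struct pl08x_lli_build_data *bd,
					  int num_llis, u32 cctl, int len)
	{
		cctl = pl08x_cctl_bits(cctl, 1, 1, len); /* src/dst width 1 */
		pl08x_fill_lli_for_desc(bd, num_llis, len, cctl);
	}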
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
max_bytes_per_lli = bd.srcbus.buswidth * PL080_CONTROL_TRANSFER_SIZE_MASK;
This was confirmed by ARM support. Below is a summary of the mail exchange
with them:
[Viresh] What is the total data to be transferred when the source and
destination bus widths are different? Suppose the source bus width is 2 bytes
and the destination is 4 bytes. Now, in order to transfer 80 bytes, what
should the value of the TransferSize field in the control register be: 40 or 20?
[David from ARM] The value that is programmed into the TransferSize field should
be the number of <SourceWidth> transfers needed to achieve the required data
transfer.
So, to transfer 80 bytes with a source width of 2, the TransferSize field
should be programmed with:
Total transfer size
------------------- = 40
<source width>
[Viresh] Will this change if source is 4 bytes and dest is 2?
[David] Yes - the calculation then becomes:
Total transfer size
------------------- = 20
<source width>
Also, max_bytes_per_lli must be calculated after fixing the src and dest
widths, not before. So move this code to the correct place.
This patch also removes max_bytes_per_lli from an earlier print message, as at
that point max_bytes_per_lli is unknown.
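In code, the rule from the mail thread reduces to (illustrative helper, not a
driver function):

	/* TransferSize counts source-width units, so for a request of
	 * 'bytes' with source width 'srcwidth' (1, 2 or 4): */
	static u32 pl08x_transfer_size(u32 bytes, u32 srcwidth)
	{
		return bytes / srcwidth;	/* 80/2 = 40, 80/4 = 20 */
	}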
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The PL080 manual says: "Bursts do not cross the 1KB address boundary."
We can program the controller to cross a 1 KB boundary on a burst, and the
controller takes care of this boundary condition by itself.
Following is the discussion with ARM technical support (David):
[Viresh] The manual says: "Bursts do not cross the 1KB address boundary".
What does that actually mean? The maximum size transferable with a single LLI
is 4095 * 4 = 16380 bytes (~16 KB). So, if the src/dest addresses are not
aligned to the burst size, we can't use an LLI this big.
[David] There is a difference between bursts describing the total data
transferred by the DMA controller and AHB bursts. Bursts described by the
programmable parameters in the PL080 have no direct connection with the bursts
that are seen on the AHB bus.
The statement that "Bursts do not cross the 1KB address boundary" in the TRM is
referring to AHB bursts, where this limitation is a requirement of the AHB spec.
You can still issue bursts within the PL080 that are in excess of 1KB. The
PL080 will make sure that its bursts are broken down into legal AHB bursts which
will be formatted to ensure that no AHB burst crosses a 1KB boundary.
Based on the above discussion, this patch removes all code related to the 1 KB
boundary, as we are not required to handle it in the driver.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently, if an error interrupt occurs, nothing is done in the interrupt
handler (the interrupts are just cleared). We must somehow indicate to the
user that the DMA is over, whether due to an ERR interrupt or a TC interrupt.
So this patch schedules the existing tasklet, with a print showing which
channels got an error interrupt.
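A sketch of the resulting handler logic (register names as in pl080.h; field
names and per-channel details are an assumption, simplified from the driver):

	static irqreturn_t pl08x_irq(int irq, void *dev)
	{
		struct pl08x_driver_data *pl08x = dev;
		u32 mask = 0, err, tc;
		int i;

		/* check & clear - ERR & TC interrupts */
		err = readl(pl08x->base + PL080_ERR_STATUS);
		if (err) {
			dev_err(&pl08x->adev->dev,
				"%s error interrupt, register value 0x%08x\n",
				__func__, err);
			writel(err, pl08x->base + PL080_ERR_CLEAR);
		}
		tc = readl(pl08x->base + PL080_INT_STATUS);
		if (tc)
			writel(tc, pl08x->base + PL080_TC_CLEAR);

		if (!err && !tc)
			return IRQ_NONE;

		for (i = 0; i < pl08x->vd->channels; i++) {
			if ((BIT(i) & err) || (BIT(i) & tc)) {
				struct pl08x_dma_chan *plchan =
					pl08x->phy_chans[i].serving;

				/* let the existing tasklet signal the user */
				tasklet_schedule(&plchan->tasklet);
				mask |= BIT(i);
			}
		}
		return mask ? IRQ_HANDLED : IRQ_NONE;
	}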
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We have just executed the following in pl08x_get_phy_channel():
ch->signal = -1;
We don't have to compare "ch->signal < 0", as this will always be true.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Simply writing 1 to bit 0 is sufficient, instead of reading and clearing bits.
Also, per the manual, bits 3-31 of the DMACConfiguration register are:
"read undefined, write as 0"
So we must not rely on values read from bits 3-31 of this register.
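The enable then collapses to a single write (PL080_CONFIG and
PL080_CONFIG_ENABLE as defined in pl080.h; base-pointer naming assumed):

	/* bit 0 enables the controller; bits 3-31 must be written as 0 */
	writel(PL080_CONFIG_ENABLE, pl08x->base + PL080_CONFIG);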
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Insert notifiers for the runtime PM API. With this, the runtime PM layer kicks
into action where used.
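The pattern is the usual runtime-PM bracketing (illustrative placement, not
the exact call sites of the patch):

	pm_runtime_get_sync(&pl08x->adev->dev); /* before touching hardware */
	/* ... program and start the channel ... */
	pm_runtime_put(&pl08x->adev->dev);	/* once the channel is idle */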
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For 8 memory and 16 slave channels, 35 lines are printed at boot, which is too
much. Most of these would be more useful for debugging, so move a few of them
to dev_dbg instead of dev_info. Now only 3 lines are printed at boot.
This also rearranges one of the debug messages to fit into two lines.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A similar comment is also present above the routine pl08x_choose_master_bus().
Keep only one of them, and rewrite it to convey the message clearly.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As mentioned in Documentation/CodingStyle,
the preferred form for passing the size of a struct is the following:
p = kmalloc(sizeof(*p), ...);
The alternative form, where the struct name is spelled out, hurts readability
and introduces an opportunity for a bug when the pointer variable type is
changed but the corresponding sizeof passed to the memory allocator is not.
This patch replaces sizeof(struct xyz) with sizeof(*ptr) at several
occurrences in the driver.
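For example (illustrative, using one of the driver's own types):

	/* before: repeats the type name */
	txd = kzalloc(sizeof(struct pl08x_txd), GFP_NOWAIT);

	/* after: stays correct even if txd's type later changes */
	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);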
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The header files included in the driver are not in alphabetical order.
Rearrange them alphabetically.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There were a few formatting-related issues in the code. This patch fixes them.
Fixes include:
- remove extra blank lines
- align code to 80 columns
- combine several lines into one
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
<asm/hardware/pl080.h> has no protection against multiple inclusion, so we get
compilation errors when this file is included more than once. This patch adds
#ifdef guards at the top of the file to protect it against multiple inclusion.
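The guard takes the usual shape (the exact guard symbol is whatever the patch
chose; ASM_PL080_H shown as an example):

	#ifndef ASM_PL080_H
	#define ASM_PL080_H

	/* ... register and bit definitions ... */

	#endif /* ASM_PL080_H */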
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In this driver, we can trigger cyclic transfers on peripheral-DMA interfaces.
This depends on the driver implementation and cannot depend on a platform
property: remove the dma_has_cap(DMA_CYCLIC, ) test, which has no meaning.
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The cyclic property and the paused state are encoded as bits in the channel
status bitfield. Tests of those bits are wrapped in convenient helper
functions, sketched below.
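A sketch of such helpers, assuming the at_hdmac naming (the bit and field
names are an assumption):

	static inline int atc_chan_is_cyclic(struct at_dma_chan *atchan)
	{
		return test_bit(ATC_IS_CYCLIC, &atchan->status);
	}

	static inline int atc_chan_is_paused(struct at_dma_chan *atchan)
	{
		return test_bit(ATC_IS_PAUSED, &atchan->status);
	}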
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Save/restore the DMA controller state across a suspend-resume sequence.
The prepare() function waits for the non-cyclic channels to become idle. It
also deals with cyclic operations by starting at the next period when
resuming.
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dmaengine routines can be called from interrupt context and with
interrupts disabled, whereas spin_unlock_bh can't be called from
such contexts. So this patch converts all spin_lock* calls
to their irqsave variants.
Also, the spin_lock() used in the tasklet is converted to the irqsave
variant, as the tasklet can be interrupted, and DMA requests from such
interrupts may also take the same lock.
Idea from dw_dmac patch by Viresh Kumar.
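The conversion follows the usual pattern (illustrative, lock name from
at_hdmac assumed):

	unsigned long flags;

	/* was: spin_lock_bh(&atchan->lock); */
	spin_lock_irqsave(&atchan->lock, flags);
	/* ... manipulate descriptor lists ... */
	spin_unlock_irqrestore(&atchan->lock, flags);
	/* was: spin_unlock_bh(&atchan->lock); */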
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
After calling mxs_dma_disable_chan() for a channel, that channel
becomes unusable because some controller registers can only be written
when the clock is enabled via CLKGATE.
Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 90b44f8 introduced the dmaengine_prep_slave_single() API, which pulls
scatterlist.h into dmaengine.h, so the forward declaration of struct
scatterlist is no longer required.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
In case an error occurs while doing memcpy transfers, we must physically
terminate all transfers too. This is achieved by calling the device_control()
routine with TERMINATE_ALL as the parameter (see the illustration below).
This is also required when the module is removed while we are in the middle of
some transfers.
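The illustration referenced above, in terms of the dmaengine API of the time
(call site simplified):

	/* physically stop the channel and drop all queued descriptors */
	chan->device->device_control(chan, DMA_TERMINATE_ALL, 0);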
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For clients which require a single slave transfer and don't want to be
bothered with the scatterlist API, this helper provides a simple API for the
transfer and creates a single scatterlist for the DMA API.
Idea from Russell King.
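The helper is roughly the following inline wrapper (condensed; see
dmaengine.h for the exact signature):

	static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
		struct dma_chan *chan, void *buf, size_t len,
		enum dma_data_direction dir, unsigned long flags)
	{
		struct scatterlist sg;

		sg_init_one(&sg, buf, len);
		return chan->device->device_prep_slave_sg(chan, &sg, 1,
							  dir, flags);
	}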
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit d006199e72a9 ("serial: sh-sci: Regtype probing doesn't need to be
fatal.") made sci_init_single() return when sci_probe_regmap() succeeds,
although it should return when sci_probe_regmap() fails. This causes
systems using the serial sh-sci driver to crash during boot.
Fix the problem by using the right return condition.
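Condensed, the fix inverts the sense of the test, along these lines (an
illustrative diff, assuming the usual 0-on-success return convention for
sci_probe_regmap()):

	-	if (!sci_probe_regmap(p))
	+	if (sci_probe_regmap(p))
			return -EINVAL;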
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The generic library code already exports the generic function, this was
left-over from the ARM-specific version that just got removed.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since commit 1eb19a12bd ("lib/sha1: use the git implementation of
SHA-1"), the ARM SHA1 routines no longer work. The reason? They
depended on the larger 320-byte workspace, and now the sha1 workspace is
just 16 words (64 bytes). So the assembly version would overwrite the
stack randomly.
The optimized asm version is also probably slower than the new improved
C version, so there's no reason to keep it around. At least that was
the case in git, where what appears to be the same assembly language
version was removed two years ago because the optimized C BLK_SHA1 code
was faster.
Reported-and-tested-by: Joachim Eastwood <manabian@gmail.com>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
task->cred is declared as __rcu, and access to other tasks' ->cred is,
indeed, protected. Access to current->cred does not need rcu_dereference()
at all, since only the task itself can change its ->cred. sparse, of
course, has no way of knowing that...
Add a force-cast in current_cred() and make current_fsuid() et al. use it.
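The force-cast amounts to something like the following (condensed from
cred.h):

	#define current_cred() \
		rcu_dereference_protected(current->cred, 1)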
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Al points out that the do_follow_link() helper function really is
misnamed - it's about whether we should try to follow a symlink or not,
not about actually doing the following.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
After commit 3567866bf2: "RCUify freeing acls, let check_acl() go ahead in
RCU mode if acl is cached" posix_acl_permission is being called with an
unsupported flag and the permission check fails. This patch fixes the issue.
Signed-off-by: Ari Savolainen <ari.m.savolainen@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* 'for-linus' of git://git.open-osd.org/linux-open-osd:
ore: Make ore its own module
exofs: Rename raid engine from exofs/ios.c => ore
exofs: ios: Move to a per inode components & device-table
exofs: Move exofs specific osd operations out of ios.c
exofs: Add offset/length to exofs_get_io_state
exofs: Fix truncate for the raid-groups case
exofs: Small cleanup of exofs_fill_super
exofs: BUG: Avoid sbi realloc
exofs: Remove pnfs-osd private definitions
nfs_xdr: Move nfs4_string definition out of #ifdef CONFIG_NFS_V4
The inode structure layout is largely random, and some of the vfs paths
really do care. The path lookup in particular is already quite D$
intensive, and profiles show that accessing the 'inode->i_op->xyz'
fields is quite costly.
We already optimized the dcache to not unnecessarily load the d_op
structure for members that are often NULL using the DCACHE_OP_xyz bits
in dentry->d_flags, and this does something very similar for the inode
ops that are used during pathname lookup.
It also re-orders the fields so that the fields accessed by 'stat' are
together at the beginning of the inode structure, and roughly in the
order accessed.
The effect of this seems to be in the 1-2% range for an empty kernel
"make -j" run (which is fairly kernel-intensive, mostly in filename
lookup), so it's visible. The numbers are fairly noisy, though, and
likely depend a lot on exact microarchitecture. So there's more tuning
to be done.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Gcc tends to generate better code with small integers, including the
DCACHE_xyz flag tests - so move the common ones to be first in the list.
Also just remove the unused DCACHE_INOTIFY_PARENT_WATCHED and
DCACHE_AUTOFS_PENDING values, since their users no longer exist in the source
tree.
And add a "unlikely()" to the DCACHE_OP_COMPARE test, since we want the
common case to be a nice straight-line fall-through.
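After the shuffle, the hot bits occupy the low positions, along these lines
(values illustrative):

	#define DCACHE_OP_HASH		0x0001
	#define DCACHE_OP_COMPARE	0x0002
	#define DCACHE_OP_REVALIDATE	0x0004
	#define DCACHE_OP_DELETE	0x0008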
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
net: Compute protocol sequence numbers and fragment IDs using MD5.
crypto: Move md5_transform to lib/md5.c
Export everything from ore that needs exporting. Change Kbuild and Kconfig
to build ore.ko as an independent module. Import ore from exofs.
ORE stands for "Objects Raid Engine"
This patch is a mechanical rename of everything that was in ios.c
and its API declaration to an ore.c and an osd_ore.h header. The ore
engine will later be used by the pnfs objects layout driver.
* File ios.c => ore.c
* Declaration of types and API are moved from exofs.h to a new
osd_ore.h
* All used types are prefixed by ore_ from their exofs_ name.
* Shift includes from exofs.h to osd_ore.h so osd_ore.h is
independent, include it from exofs.h.
Other than the pure rename there are no other changes. The next patch
will move ore into its own module and export the API
to be used by exofs and, later, the layout driver.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
The exofs raid engine was saving memory space by having a single layout-info,
a single pid, and a single device-table, global to the filesystem, and then
passing credential and object_id info at the io_state level, private for each
inode. It would also devise this contraption of rotating the device-table
view for each inode->ino to spread out the device usage.
This is not compatible with the pnfs-objects standard, which demands that
each inode can have its own layout-info and device-table, and each object
component its own pid, oid and creds.
So: Bring exofs raid engine to be usable for generic pnfs-objects use by:
* Define an exofs_comp structure that holds obj_id and credential info.
* Break up the exofs_layout struct into an exofs_components structure that
  holds a possible array of exofs_comp and the array of devices, plus the
  sizes of the arrays.
* Add a "comps" parameter to get_io_state() that specifies the ids creds
and device array to use for each IO.
This enables to keep the layout global, but the device-table view, creds
and IDs at the inode level. It only adds two 64bit to each inode, since
some of these members already existed in another form.
* ios raid engine now access layout-info and comps-info through the passed
pointers. Everything is pre-prepared by caller for generic access of
these structures and arrays.
At the exofs Level:
* Super block holds an exofs_components struct that holds the device
array, previously in layout. The devices there are in device-table
order. The device-array is twice bigger and repeats the device-table
twice so now each inode's device array can point to a random device
and have a round-robin view of the table, making it compatible to
previous exofs versions.
* Each inode has an exofs_components struct that is initialized at
  load time, with its own view of the device-table IDs and creds.
When doing IO this gets passed to the io_state together with the
layout.
While performing this change, bugs were found where credentials with the
wrong IDs were used to access the different SB objects (super.c), as well as
some dead code. This was never noticed because the target we use does not
check the credentials.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
ios.c will be moving to an external library, for use by the objects layout
driver. Remove some exofs-specific functions from it.
Also, g_attr_logical_length is used by both inode.c and ios.c; move its
definition to the latter, to keep it independent.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
In future raid code we will need to know the IO offset/length, and whether it
is a read or a write, to determine some of the array sizes we'll need.
So add a new exofs_get_rw_state() API for use when writing/reading. All other
simple cases are left using the old way.
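The new entry point's shape is roughly the following (hedged; see the patch
for the exact prototype):

	int exofs_get_rw_state(struct exofs_layout *layout, bool is_reading,
			       u64 offset, u64 length,
			       struct exofs_io_state **ios);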
The major change is that we now need to call exofs_get_io_state later, at
inode.c::read_exec and inode.c::write_exec, when we actually know these
things. This patch is kept separate so I can test it apart from other
changes.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Computers have become a lot faster since we compromised on the
partial MD4 hash which we currently use for performance reasons.
MD5 is a much safer choice, and is in line with both RFC 1948 and
other ISS generators (OpenBSD, Solaris, etc.).
Furthermore, having only 24 bits of the sequence number be truly
unpredictable is a very serious limitation. So the periodic
regeneration and 8-bit counter have been removed. We compute and
use a full 32-bit sequence number.
For ipv6, DCCP was found to use a 32-bit truncated initial sequence
number (it needs 43 bits) and that is fixed here as well.
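Schematically, the generator becomes something like this (a simplified
sketch with a hypothetical function name, not the exact net/ code; net_secret
stands for the boot-time random key):

	static u32 seq_from_md5(__be32 saddr, __be32 daddr,
				__be16 sport, __be16 dport,
				const u32 *net_secret)
	{
		u32 hash[4];

		hash[0] = (__force u32)saddr;
		hash[1] = (__force u32)daddr;
		hash[2] = ((__force u16)sport << 16) + (__force u16)dport;
		hash[3] = net_secret[15];

		md5_transform(hash, net_secret);	/* from lib/md5.c */
		return hash[0];		/* full 32 unpredictable bits */
	}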
Reported-by: Dan Kaminsky <dan@doxpara.com>
Tested-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
* git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6:
cifs: cope with negative dentries in cifs_get_root
cifs: convert prefixpath delimiters in cifs_build_path_to_root
CIFS: Fix missing a decrement of inFlight value
cifs: demote DFS referral lookup errors to cFYI
Revert "cifs: advertise the right receive buffer size to the server"
* 'stable/bug.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen:
xen/trace: Fix compile error when CONFIG_XEN_PRIVILEGED_GUEST is not set
xen: Fix misleading WARN message at xen_release_chunk
xen: Fix printk() format in xen/setup.c
xen/tracing: it looks like we wanted CONFIG_FTRACE
xen/self-balloon: Add dependency on tmem.
xen/balloon: Fix compile errors - missing header files.
xen/grant: Fix compile warning.
xen/pciback: remove duplicated #include