Searched hist:"27b1ea09" (Results 1 – 4 of 4) sorted by relevance

/openbmc/linux/arch/powerpc/platforms/cell/spufs/
coredump.c  27b1ea09 Fri Jul 20 14:39:34 CDT 2007 Christoph Hellwig <hch@lst.de> [CELL] spufs: make sure contexts are scheduled again after spu_acquire_saved

Currently a process is removed from the physical spu when spu_acquire_saved
is called, but it is never put back. This patch adds a new spu_release_saved
that is to be paired with spu_acquire_saved and puts the process back if
it was in RUNNABLE state before.

Neither Jeremy nor I am entirely happy about this exact patch, because
it adds another spu_activate call outside of the owner thread, but I
feel this is the best short-term fix we can come up with.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
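
As a rough illustration of the pairing this commit describes, here is a
minimal sketch of a caller (the function name and body are hypothetical,
not the actual spufs code; only spu_acquire_saved, spu_release_saved, and
struct spu_context come from the commit itself):

	/* Hypothetical caller showing the new acquire/release pairing. */
	static ssize_t spufs_example_read(struct spu_context *ctx,
					  char __user *buf, size_t len)
	{
		ssize_t ret = 0;

		/* Pull the context off the physical SPU and save its state. */
		spu_acquire_saved(ctx);

		/* ... access the saved context state here ... */

		/*
		 * New in this patch: spu_release_saved() reactivates the
		 * context (via spu_activate) if it was in RUNNABLE state
		 * before it was saved; a plain spu_release() would leave
		 * it descheduled.
		 */
		spu_release_saved(ctx);

		return ret;
	}
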
context.c  27b1ea09 Fri Jul 20 14:39:34 CDT 2007 Christoph Hellwig <hch@lst.de> [CELL] spufs: make sure contexts are scheduled again after spu_acquire_saved

spufs.h  27b1ea09 Fri Jul 20 14:39:34 CDT 2007 Christoph Hellwig <hch@lst.de> [CELL] spufs: make sure contexts are scheduled again after spu_acquire_saved

file.c  27b1ea09 Fri Jul 20 14:39:34 CDT 2007 Christoph Hellwig <hch@lst.de> [CELL] spufs: make sure contexts are scheduled again after spu_acquire_saved
