===========================
Hardware Spinlock Framework
===========================

Introduction
============

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors, which otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for Inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).

To achieve fast message-based communications, minimal kernel support
is needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on simple data structures that are shared between
the remote processors, and access to them is synchronized using the hwspinlock
module (the remote processor directly places new messages in this shared data
structure).

A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.

User API
========

::

  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
in case an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).

::

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.

Should be called from a process context (might sleep).

::

  int of_hwspin_lock_get_id(struct device_node *np, int index);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).

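For instance, a driver's probe routine might resolve the first lock
referenced by its device-tree node and then reserve it. This is a minimal
sketch; the helper name and error handling are illustrative assumptions::

	#include <linux/hwspinlock.h>
	#include <linux/of.h>

	static int example_probe_get_lock(struct device *dev,
					  struct hwspinlock **hwlock)
	{
		int id;

		/* resolve the first phandle in this node's "hwlocks" property */
		id = of_hwspin_lock_get_id(dev->of_node, 0);
		if (id < 0)
			return id;	/* may be -EPROBE_DEFER */

		/* reserve that specific lock for this driver */
		*hwlock = hwspin_lock_request_specific(id);
		if (!*hwlock)
			return -EBUSY;

		return 0;
	}
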
::

  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).

::

  int hwspin_lock_bust(struct hwspinlock *hwlock, unsigned int id);

After verifying the owner of the hwspinlock, release a previously acquired
hwspinlock; returns 0 on success, or an appropriate error code on failure
(e.g. -EOPNOTSUPP if the bust operation is not defined for the specific
hwspinlock).

Should be called from a process context (might sleep).

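A remoteproc recovery path, for example, might forcibly release a lock
that a crashed remote core was still holding. A brief sketch, where the
owner id is a hypothetical platform-defined value::

	/* hypothetical platform-defined owner id of the remote core */
	#define REMOTE_CORE_OWNER_ID	2

	static void example_recover_lock(struct hwspinlock *hwlock)
	{
		int ret;

		/* release the lock only after verifying the remote owner */
		ret = hwspin_lock_bust(hwlock, REMOTE_CORE_OWNER_ID);
		if (ret)
			pr_err("failed to bust hwspinlock: %d\n", ret);
	}
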
::

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
				  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled,
local interrupts are disabled and their previous state is saved at the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

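As a brief sketch of how the flags placeholder pairs with
hwspin_unlock_irqrestore() (the 100 msecs timeout is an arbitrary example
value)::

	static int example_irqsave(struct hwspinlock *hwlock)
	{
		unsigned long flags;
		int ret;

		/* spin for up to 100 msecs; irqs are disabled on success */
		ret = hwspin_lock_timeout_irqsave(hwlock, 100, &flags);
		if (ret)
			return ret;

		/* ... critical section: do NOT sleep here ... */

		/* unlock and restore the interrupt state saved in 'flags' */
		hwspin_unlock_irqrestore(hwlock, &flags);
		return 0;
	}
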
::

  int hwspin_lock_timeout_raw(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

Caution: the caller must wrap this routine with a mutex or spinlock to
avoid deadlock; in return, this variant lets the caller perform
time-consuming or sleepable operations while holding the hardware lock.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

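A minimal sketch of that locking discipline, with a hypothetical
driver-local mutex serializing local users of the raw hardware lock::

	#include <linux/mutex.h>

	static DEFINE_MUTEX(example_sw_lock);	/* serializes local users */

	static int example_raw_section(struct hwspinlock *hwlock)
	{
		int ret;

		/* the mutex prevents local deadlock on the hardware lock */
		mutex_lock(&example_sw_lock);

		ret = hwspin_lock_timeout_raw(hwlock, 100);
		if (ret)
			goto out;

		/* time-consuming or sleepable work is allowed here */

		hwspin_unlock_raw(hwlock);
	out:
		mutex_unlock(&example_sw_lock);
		return ret;
	}
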
::

  int hwspin_lock_timeout_in_atomic(struct hwspinlock *hwlock, unsigned int to);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

This function shall be called only from an atomic context and the timeout
value shall not exceed a few msecs.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

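For instance, a caller that is already atomic because it holds a regular
spinlock might use it like this (a sketch; names are illustrative)::

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_lock);

	static int example_atomic_section(struct hwspinlock *hwlock)
	{
		int ret;

		spin_lock(&example_lock);	/* atomic context from here on */

		/* spin for up to 2 msecs; keep the timeout to a few msecs */
		ret = hwspin_lock_timeout_in_atomic(hwlock, 2);
		if (!ret) {
			/* ... short critical section, do NOT sleep ... */
			hwspin_unlock_in_atomic(hwlock);
		}

		spin_unlock(&example_lock);
		return ret;
	}
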
::

  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled so
caller must not sleep, and is advised to release the hwspinlock as soon as
possible, in order to minimize remote cores polling on the hardware
interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption and the local
interrupts are disabled so caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).

The function will never sleep.

::

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled,
the local interrupts are disabled and their previous state is saved
at the given flags placeholder. The caller must not sleep, and is advised
to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_raw(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Caution: the caller must wrap this routine with a mutex or spinlock to
avoid deadlock; in return, this variant lets the caller perform
time-consuming or sleepable operations while holding the hardware lock.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_in_atomic(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

This function shall be called only from an atomic context.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. Always succeeds, and can be called
from any context (the function never sleeps).

.. note::

  code should **never** unlock an hwspinlock which is already unlocked
  (there is no protection against this).

::

  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.
The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption and local
interrupts are enabled. This function will never sleep.

::

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption is reenabled,
and the state of the local interrupts is restored to the state saved at
the given flags. This function will never sleep.

::

  void hwspin_unlock_raw(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  void hwspin_unlock_in_atomic(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.

Typical usage
=============

::

	#include <linux/hwspinlock.h>
	#include <linux/err.h>

	int hwspinlock_example1(void)
	{
		struct hwspinlock *hwlock;
		int ret, id;

		/* dynamically assign a hwspinlock */
		hwlock = hwspin_lock_request();
		if (!hwlock)
			...

		id = hwspin_lock_get_id(hwlock);
		/* probably need to communicate id to a remote processor now */

		/* take the lock, spin for 1 sec if it's already taken */
		ret = hwspin_lock_timeout(hwlock, 1000);
		if (ret)
			...

		/*
		 * we took the lock, do our thing now, but do NOT sleep
		 */

		/* release the lock */
		hwspin_unlock(hwlock);

		/* free the lock */
		ret = hwspin_lock_free(hwlock);
		if (ret)
			...

		return ret;
	}

	int hwspinlock_example2(void)
	{
		struct hwspinlock *hwlock;
		int ret;

		/*
		 * assign a specific hwspinlock id - this should be called early
		 * by board init code.
		 */
		hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
		if (!hwlock)
			...

		/* try to take it, but don't spin on it */
		ret = hwspin_trylock(hwlock);
		if (ret) {
			pr_info("lock is already taken\n");
			return -EBUSY;
		}

		/*
		 * we took the lock, do our thing now, but do NOT sleep
		 */

		/* release the lock */
		hwspin_unlock(hwlock);

		/* free the lock */
		ret = hwspin_lock_free(hwlock);
		if (ret)
			...

		return ret;
	}


API for implementors
====================

::

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
		const struct hwspinlock_ops *ops, int base_id, int num_locks);

To be called from the underlying platform-specific implementation, in
order to register a new hwspinlock device (which is usually a bank of
numerous locks). Should be called from a process context (this function
might sleep).

Returns 0 on success, or appropriate error code on failure.

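As an illustrative sketch, a platform driver's probe might register a bank
like this (the names, lock count and ops structure are hypothetical; the
ops callbacks are sketched at the end of this document)::

	#include <linux/hwspinlock.h>
	#include <linux/platform_device.h>

	#define EXAMPLE_NUM_LOCKS	32
	#define EXAMPLE_BASE_ID		0

	/* platform-specific callbacks, defined later in this file */
	static const struct hwspinlock_ops example_hwspinlock_ops;

	static int example_hwspinlock_probe(struct platform_device *pdev)
	{
		struct hwspinlock_device *bank;

		/* one trailing 'struct hwspinlock' per lock in the bank */
		bank = devm_kzalloc(&pdev->dev,
				    struct_size(bank, lock, EXAMPLE_NUM_LOCKS),
				    GFP_KERNEL);
		if (!bank)
			return -ENOMEM;

		platform_set_drvdata(pdev, bank);

		/* a real driver would also set each lock's priv member here */

		return hwspin_lock_register(bank, &pdev->dev,
					    &example_hwspinlock_ops,
					    EXAMPLE_BASE_ID, EXAMPLE_NUM_LOCKS);
	}
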
::

  int hwspin_lock_unregister(struct hwspinlock_device *bank);

To be called from the underlying vendor-specific implementation, in order
to unregister an hwspinlock device (which is usually a bank of numerous
locks).

Should be called from a process context (this function might sleep).

Returns 0 on success, or an appropriate error code on failure (e.g.
-EBUSY if one of the hwspinlocks is still in use).

Important structs
=================

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

::

	/**
	 * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
	 * @dev: underlying device, will be used to invoke runtime PM api
	 * @ops: platform-specific hwspinlock handlers
	 * @base_id: id index of the first lock in this device
	 * @num_locks: number of locks in this device
	 * @lock: dynamically allocated array of 'struct hwspinlock'
	 */
	struct hwspinlock_device {
		struct device *dev;
		const struct hwspinlock_ops *ops;
		int base_id;
		int num_locks;
		struct hwspinlock lock[];
	};

struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock::

	/**
	 * struct hwspinlock - this struct represents a single hwspinlock instance
	 * @bank: the hwspinlock_device structure which owns this lock
	 * @lock: initialized and used by hwspinlock core
	 * @priv: private data, owned by the underlying platform-specific hwspinlock drv
	 */
	struct hwspinlock {
		struct hwspinlock_device *bank;
		spinlock_t lock;
		void *priv;
	};

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.

Implementation callbacks
========================

There are three possible callbacks defined in 'struct hwspinlock_ops'::

	struct hwspinlock_ops {
		int (*trylock)(struct hwspinlock *lock);
		void (*unlock)(struct hwspinlock *lock);
		void (*relax)(struct hwspinlock *lock);
	};

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may **not** sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.

The ->relax() callback is optional. It is called by hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may **not** sleep.

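To make this concrete, here is a minimal sketch of these callbacks for a
hypothetical memory-mapped lock bank; the register semantics (read 1 when
the lock was grabbed, write 0 to release) are an illustrative assumption::

	#include <linux/delay.h>
	#include <linux/io.h>

	static int example_trylock(struct hwspinlock *lock)
	{
		/* the driver set ->priv to this lock's register address */
		void __iomem *reg = lock->priv;

		/* assumed semantics: reading 1 means we grabbed the lock */
		return readl(reg) == 1;
	}

	static void example_unlock(struct hwspinlock *lock)
	{
		void __iomem *reg = lock->priv;

		/* assumed semantics: writing 0 releases the lock */
		writel(0, reg);
	}

	static void example_relax(struct hwspinlock *lock)
	{
		ndelay(50);	/* arbitrary back-off between ->trylock() tries */
	}

	static const struct hwspinlock_ops example_hwspinlock_ops = {
		.trylock	= example_trylock,
		.unlock		= example_unlock,
		.relax		= example_relax,
	};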