Intel(R) Threading Building Blocks Doxygen Documentation
version 4.2.3
#ifndef __TBB_queuing_mutex_H
#define __TBB_queuing_mutex_H
#define __TBB_queuing_mutex_H_include_area
#if TBB_USE_THREADING_TOOLS
#undef __TBB_queuing_mutex_H_include_area
queuing_mutex: Queuing mutex with local-only spinning.

queuing_mutex()
Construct unacquired mutex.
static const bool is_rw_mutex
static const bool is_recursive_mutex
static const bool is_fair_mutex
atomic< scoped_lock * > q_tail
The last competitor requesting the lock.
void __TBB_EXPORTED_METHOD internal_construct()

queuing_mutex::scoped_lock: The scoped locking pattern.

scoped_lock()
Construct lock that has not acquired a mutex.
scoped_lock(queuing_mutex &m)
Acquire lock on given mutex.
~scoped_lock()
Release lock (if lock is held).
void __TBB_EXPORTED_METHOD acquire(queuing_mutex &m)
Acquire lock on given mutex.
bool __TBB_EXPORTED_METHOD try_acquire(queuing_mutex &m)
Acquire lock on given mutex if free (i.e., non-blocking).
void __TBB_EXPORTED_METHOD release()
Release lock.
void initialize()
Initialize fields to mean "no lock held".
queuing_mutex * mutex
The pointer to the mutex owned, or NULL if not holding a mutex.
scoped_lock * next
The pointer to the next competitor for a mutex.
uintptr_t going
The local spin-wait variable.

Related declarations:

Base class for types that should not be copied or assigned.
void poison_pointer(T *__TBB_atomic &)
#define __TBB_DEFINE_PROFILING_SET_NAME(sync_object_type)
#define __TBB_EXPORTED_METHOD
Copyright © 2005-2020 Intel Corporation. All Rights Reserved.
Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are
registered trademarks or trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
* Other names and brands may be claimed as the property of others.