    • 1. Granted patent
    • System and method for implementing NUMA-aware reader-writer locks
    • Publication number: US08966491B2
    • Publication date: 2015-02-24
    • Application number: US13458868
    • Filing date: 2012-04-27
    • Inventors: Irina Calciu, David Dice, Victor M. Luchangco, Virendra J. Marathe, Nir N. Shavit, Yosef Lev
    • IPC: G06F9/46
    • CPC: G06F9/526; G06F2209/523
    • NUMA-aware reader-writer locks may leverage lock cohorting techniques to band together writer requests from a single NUMA node. The locks may relax the order in which the lock schedules the execution of critical sections of code by reader threads and writer threads, allowing lock ownership to remain resident on a single NUMA node for long periods, while also taking advantage of parallelism between reader threads. Threads may contend on node-level structures to get permission to acquire a globally shared reader-writer lock. Writer threads may follow a lock cohorting strategy of passing ownership of the lock in write mode from one thread to a cohort writer thread without releasing the shared lock, while reader threads from multiple NUMA nodes may simultaneously acquire the shared lock in read mode. The reader-writer lock may follow a writer-preference policy, a reader-preference policy or a hybrid policy.
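The abstract above layers two ideas: reader indication kept on node-level structures and a single globally shared lock that writers serialize on. Below is a minimal Java sketch of just that skeleton (class and method names, the node-id parameter, and the back-off policy are illustrative, not taken from the patent); it omits the writer cohort hand-off and the writer-/reader-preference policies that the claimed lock adds on top.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch: per-NUMA-node reader counters plus one globally shared write lock.
// Readers on different nodes touch different counters; a writer serializes globally
// and then waits for every node's reader count to drain.
class SimpleNumaRWLock {
    private final AtomicInteger[] readersPerNode;          // one reader counter per NUMA node
    private final ReentrantLock writer = new ReentrantLock();

    SimpleNumaRWLock(int numaNodes) {
        readersPerNode = new AtomicInteger[numaNodes];
        for (int i = 0; i < numaNodes; i++) readersPerNode[i] = new AtomicInteger();
    }

    void readLock(int node) {
        for (;;) {
            readersPerNode[node].incrementAndGet();        // announce the read on our own node
            if (!writer.isLocked()) return;                // no active writer: read granted
            readersPerNode[node].decrementAndGet();        // writer active: back off and wait
            while (writer.isLocked()) Thread.onSpinWait();
        }
    }

    void readUnlock(int node) { readersPerNode[node].decrementAndGet(); }

    void writeLock() {
        writer.lock();                                     // serialize writers globally
        for (AtomicInteger r : readersPerNode)             // wait for in-flight readers to drain
            while (r.get() != 0) Thread.onSpinWait();
    }

    void writeUnlock() { writer.unlock(); }
}
```

Because each reader touches only its own node's counter, read-side cache traffic stays local to the node, which is the property the patented lock exploits.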
    • 2. Granted patent
    • System and method for enabling turbo mode in a processor
    • Publication number: US08775837B2
    • Publication date: 2014-07-08
    • Application number: US13213833
    • Filing date: 2011-08-19
    • Inventors: David Dice, Nir N. Shavit, Virendra J. Marathe
    • IPC: G06F1/32; G06F1/00; G06F15/00; G06F9/46
    • CPC: G06F9/526; G06F1/3228; G06F1/324; G06F9/485; Y02D10/126
    • The systems and methods described herein may enable a processor core to run at higher speeds than other processor cores in the same package. A thread executing on one processor core may begin waiting for another thread to complete a particular action (e.g., to release a lock). In response to determining that other threads are waiting, the thread/core may enter an inactive state. A data structure may store information indicating which threads are waiting on which other threads. In response to determining that a quorum of threads/cores are in an inactive state, one of the threads/cores may enter a turbo mode in which it executes at a higher speed than the baseline speed for the cores. A thread holding a lock and executing in turbo mode may perform work delegated by waiting threads at the higher speed. A thread may exit the inactive state when the waited-for action is completed.
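Setting aside the frequency-scaling part, the enabling mechanism in the abstract is delegation: a thread that would otherwise contend hands its critical-section work to the current lock holder and goes passive, and the resulting idleness of the waiting cores is what allows the holder's core to be boosted. A rough Java sketch of that delegation loop follows (all names are illustrative; clock boosting itself is done by the hardware/OS and appears here only as a comment).

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Rough sketch of the delegation idea: threads that would otherwise contend hand
// their critical-section work to the current lock holder and wait passively.
// With most peers passive, the holder's core could be boosted to a higher clock
// (the actual turbo/frequency control is done by hardware/OS, not modeled here).
class TurboDelegationLock {
    private static final class Task {
        final Runnable work;
        final AtomicBoolean done = new AtomicBoolean(false);
        Task(Runnable work) { this.work = work; }
    }

    private final AtomicBoolean held = new AtomicBoolean(false);
    private final Queue<Task> delegated = new ConcurrentLinkedQueue<>();

    // Run `work` under the lock, either directly as the holder or by delegating
    // it to whichever thread currently holds the lock.
    void execute(Runnable work) {
        Task task = new Task(work);
        delegated.add(task);                              // publish the delegated request
        while (!task.done.get()) {
            if (held.compareAndSet(false, true)) {        // we became the lock holder
                try {
                    Task t;
                    while ((t = delegated.poll()) != null) {   // perform work delegated by waiters
                        t.work.run();
                        t.done.set(true);
                    }
                } finally {
                    held.set(false);
                }
            } else {
                Thread.onSpinWait();                      // "inactive" waiting thread
            }
        }
    }
}
```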
    • 3. Granted patent
    • System and method for NUMA-aware locking using lock cohorts
    • Publication number: US08694706B2
    • Publication date: 2014-04-08
    • Application number: US13458871
    • Filing date: 2012-04-27
    • Inventors: David Dice, Virendra J. Marathe, Nir N. Shavit
    • IPC: G06F12/00
    • CPC: G06F9/526
    • The system and methods described herein may be used to implement NUMA-aware locks that employ lock cohorting. These lock cohorting techniques may reduce the rate of lock migration by relaxing the order in which the lock schedules the execution of critical code sections by various threads, allowing lock ownership to remain resident on a single NUMA node longer than under strict FIFO ordering, thus reducing coherence traffic and improving aggregate performance. A NUMA-aware cohort lock may include a global shared lock that is thread-oblivious, and multiple node-level locks that provide cohort detection. The lock may be constructed from non-NUMA-aware components (e.g., spin-locks or queue locks) that are modified to provide thread-obliviousness and/or cohort detection. Lock ownership may be passed from one thread that holds the lock to another thread executing on the same NUMA node without releasing the global shared lock.
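The two properties the abstract calls out, a thread-oblivious global lock and cohort detection on the node-level locks, can be shown in a few lines. In the hypothetical sketch below a Semaphore stands in for the thread-oblivious global lock (it may be released by a different thread than the one that acquired it) and ReentrantLock.hasQueuedThreads() stands in for cohort detection; a real cohort lock would also bound the number of consecutive hand-offs so that other nodes cannot starve.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

// Simplified cohort-lock sketch: a thread first takes its node-level lock, then
// the global lock only if no cohort peer on the same node already holds it.  On
// release, if a peer on the same node is waiting, the global lock is kept and
// only the node lock is handed over (cohort hand-off).
class CohortLock {
    private final Semaphore globalLock = new Semaphore(1);    // thread-oblivious: any thread may release it
    private final ReentrantLock[] nodeLocks;
    private final boolean[] nodeHoldsGlobal;                  // guarded by the corresponding node lock

    CohortLock(int numaNodes) {
        nodeLocks = new ReentrantLock[numaNodes];
        nodeHoldsGlobal = new boolean[numaNodes];
        for (int i = 0; i < numaNodes; i++) nodeLocks[i] = new ReentrantLock();
    }

    void lock(int node) {
        nodeLocks[node].lock();                    // contend only with threads on the same node
        if (!nodeHoldsGlobal[node]) {              // no cohort peer passed the global lock to this node
            globalLock.acquireUninterruptibly();   // contend with the other nodes
            nodeHoldsGlobal[node] = true;
        }
    }

    void unlock(int node) {
        if (nodeLocks[node].hasQueuedThreads()) {
            // Cohort detection: a peer on this node is waiting, so keep the global
            // lock and pass only the node-level lock.  (A production lock bounds
            // consecutive hand-offs so other nodes cannot starve.)
            nodeLocks[node].unlock();
        } else {
            nodeHoldsGlobal[node] = false;         // no local successor: release both levels
            globalLock.release();
            nodeLocks[node].unlock();
        }
    }
}
```

Keeping the global lock across local hand-offs is what reduces lock migration: the critical section keeps executing on one node, so the protected data stays in that node's caches.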
    • 4. Granted patent
    • System and method for tracking references to shared objects using byte-addressable per-thread reference counters
    • Publication number: US08677076B2
    • Publication date: 2014-03-18
    • Application number: US12750455
    • Filing date: 2010-03-30
    • Inventors: David Dice, Nir N. Shavit
    • IPC: G06F12/00; G06F13/00; G06F13/28
    • CPC: G06F12/0261
    • The system described herein may track references to a shared object by concurrently executing threads using a reference tracking data structure that includes an owner field and an array of byte-addressable per-thread entries, each including a per-thread reference counter and a per-thread counter lock. Slotted threads assigned to a given array entry may increment or decrement the per-thread reference counter in that entry in response to referencing or dereferencing the shared object. Unslotted threads may increment or decrement a shared unslotted reference counter. A thread may update the data structure and/or examine it to determine whether the number of references to the shared object is zero or non-zero using a blocking-optimistic or a non-blocking mechanism. A checking thread may acquire ownership of the data structure, obtain an instantaneous snapshot of all counters, and return a value indicating whether the number of references to the shared object is zero or non-zero.
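A reduced sketch of the counter layout described above: each slotted thread owns one array entry, unslotted threads share a fallback counter, and the reference check sums everything. The owner field, per-slot counter locks, and the blocking-optimistic/non-blocking snapshot protocols from the abstract are omitted, so this version only gives a reliable zero/non-zero answer when no acquire races with the scan (all names are illustrative).

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch: each "slotted" thread increments/decrements its own counter
// slot, so acquire/release never contend; threads without a slot fall back to a
// shared counter.  The true reference count is the sum over all counters.
class PerThreadRefCount {
    private final AtomicInteger[] slots;                   // the patent uses byte-addressable entries; ints here
    private final AtomicInteger unslotted = new AtomicInteger();

    PerThreadRefCount(int maxSlots) {
        slots = new AtomicInteger[maxSlots];
        for (int i = 0; i < maxSlots; i++) slots[i] = new AtomicInteger();
    }

    private AtomicInteger counterFor(int slot) {
        return (slot >= 0 && slot < slots.length) ? slots[slot] : unslotted;
    }

    void acquire(int slot) { counterFor(slot).incrementAndGet(); }   // new reference to the shared object
    void release(int slot) { counterFor(slot).decrementAndGet(); }   // reference dropped

    // Zero means no live references, assuming no acquire races with this scan;
    // the patented scheme adds per-slot locks and an owner field so a checking
    // thread can take a consistent snapshot of all counters.
    boolean isUnreferenced() {
        int sum = unslotted.get();
        for (AtomicInteger s : slots) sum += s.get();
        return sum == 0;
    }
}
```

Individual slots may go negative when one thread acquires and another releases; only the sum is meaningful, which is why the check scans every counter.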
    • 5. Published application
    • System and Method for Implementing NUMA-Aware Reader-Writer Locks
    • Publication number: US20130290967A1
    • Publication date: 2013-10-31
    • Application number: US13458868
    • Filing date: 2012-04-27
    • Inventors: Irina Calciu, David Dice, Victor M. Luchangco, Virendra J. Marathe, Nir N. Shavit, Yosef Lev
    • IPC: G06F9/46
    • CPC: G06F9/526; G06F2209/523
    • NUMA-aware reader-writer locks may leverage lock cohorting techniques to band together writer requests from a single NUMA node. The locks may relax the order in which the lock schedules the execution of critical sections of code by reader threads and writer threads, allowing lock ownership to remain resident on a single NUMA node for long periods, while also taking advantage of parallelism between reader threads. Threads may contend on node-level structures to get permission to acquire a globally shared reader-writer lock. Writer threads may follow a lock cohorting strategy of passing ownership of the lock in write mode from one thread to a cohort writer thread without releasing the shared lock, while reader threads from multiple NUMA nodes may simultaneously acquire the shared lock in read mode. The reader-writer lock may follow a writer-preference policy, a reader-preference policy or a hybrid policy.
    • 6. Granted patent
    • System and method for implementing hierarchical queue-based locks using flat combining
    • Publication number: US08458721B2
    • Publication date: 2013-06-04
    • Application number: US13152079
    • Filing date: 2011-06-02
    • Inventors: Virendra J. Marathe, Nir N. Shavit, David Dice
    • IPC: G06F9/46
    • CPC: G06F9/526
    • The system and methods described herein may be used to implement a scalable, hierarchal, queue-based lock using flat combining. A thread executing on a processor core in a cluster of cores that share a memory may post a request to acquire a shared lock in a node of a publication list for the cluster using a non-atomic operation. A combiner thread may build an ordered (logical) local request queue that includes its own node and nodes of other threads (in the cluster) that include lock requests. The combiner thread may splice the local request queue into a (logical) global request queue for the shared lock as a sub-queue. A thread whose request has been posted in a node that has been combined into a local sub-queue and spliced into the global request queue may spin on a lock ownership indicator in its node until it is granted the shared lock.
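The step that makes the lock hierarchical is the splice: a combiner links its cluster's pending request nodes into a local chain and appends the whole chain to the global queue with one atomic exchange, after which every thread spins only on the flag in its own node. The sketch below shows that step and the MCS-style hand-off in Java (illustrative names; the publication list, combiner election, and the flat-combining pass over the cluster are left out).

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified sketch of the queue structure: requests are MCS-style nodes.  A
// combiner splices an already-linked cluster-local chain onto the global tail
// with a single atomic exchange, so the global queue sees one enqueue per
// cluster rather than one per thread.
class SplicingQueueLock {
    static final class Node {
        volatile boolean granted;                 // set when this node is given the lock
        volatile Node next;
    }

    private final AtomicReference<Node> globalTail = new AtomicReference<>();

    // Splice a pre-linked local chain (head..tail, with tail.next == null) into the global queue.
    void spliceChain(Node head, Node tail) {
        Node prev = globalTail.getAndSet(tail);   // one atomic exchange for the whole sub-queue
        if (prev == null) {
            head.granted = true;                  // queue was empty: the first node owns the lock
        } else {
            prev.next = head;                     // predecessor will pass the lock along the chain
        }
    }

    // Each thread spins only on the flag in its own node.
    void awaitLock(Node me) {
        while (!me.granted) Thread.onSpinWait();
    }

    // Pass the lock to the next node in global order, or mark the queue empty.
    void unlock(Node me) {
        if (me.next == null) {
            if (globalTail.compareAndSet(me, null)) return;   // no successor anywhere
            while (me.next == null) Thread.onSpinWait();      // a splice is linking in behind us
        }
        me.next.granted = true;
    }
}
```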
    • 7. Granted patent
    • Efficient implicit privatization of transactional memory
    • Publication number: US08332374B2
    • Publication date: 2012-12-11
    • Application number: US12101316
    • Filing date: 2008-04-11
    • Inventors: Yosef Lev, Nir N. Shavit, David Dice, Mark A. Moir
    • IPC: G06F7/00; G06F17/00; G06F13/14
    • CPC: G06F9/466; G06F9/526
    • Apparatus, methods, and program products are disclosed that provide a technology that implicitly isolates a portion of a transactional memory that is shared between multiple threads for exclusive use by an isolating thread without the possibility of other transactions modifying the isolated portion of the transactional memory. In some of the described embodiments, read locations of a shared memory are covered by a first set of lock objects, and write locations are covered by a second set of lock objects, each lock object in each set having a reader mode and a writer mode. Some of these embodiments acquire each lock object in the first set using the reader mode, and acquire each lock object in the second set using the writer mode. These embodiments store result data values at write locations in the shared memory subsequent to acquiring said first and second sets of lock objects.
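One way to picture the covering-lock idea is a striped array of reader-writer locks: read locations hash to stripes taken in read mode, write locations to stripes taken in write mode, and the buffered writes are applied only once every covering lock is held. The sketch below is hypothetical in its details (the stripe count, the hash, and the sorted acquisition order used for deadlock avoidance are choices made here, not taken from the patent).

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: every word of the shared memory is covered by one of a
// fixed number of reader-writer lock stripes.  Read locations are covered in
// read mode, write locations in write mode, and the buffered writes are applied
// only after all covering locks are held.
class StripedPrivatization {
    private static final int STRIPES = 64;
    private final ReentrantReadWriteLock[] stripes = new ReentrantReadWriteLock[STRIPES];
    private final long[] memory;                           // stand-in for the shared transactional memory

    StripedPrivatization(int words) {
        memory = new long[words];
        for (int i = 0; i < STRIPES; i++) stripes[i] = new ReentrantReadWriteLock();
    }

    private int stripeOf(int address) { return Math.floorMod(address, STRIPES); }

    // Acquire read-mode locks over the read set and write-mode locks over the
    // write set (write mode wins when one stripe covers both), apply the writes,
    // then release.  Stripes are taken in index order to avoid deadlock.
    void commit(int[] readSet, int[] writeSet, long[] newValues) {
        TreeMap<Integer, Boolean> covering = new TreeMap<>();       // stripe index -> needs write mode
        for (int a : readSet)  covering.putIfAbsent(stripeOf(a), false);
        for (int a : writeSet) covering.put(stripeOf(a), true);

        for (Map.Entry<Integer, Boolean> e : covering.entrySet()) {
            if (e.getValue()) stripes[e.getKey()].writeLock().lock();
            else              stripes[e.getKey()].readLock().lock();
        }
        try {
            for (int i = 0; i < writeSet.length; i++) memory[writeSet[i]] = newValues[i];
        } finally {
            for (Map.Entry<Integer, Boolean> e : covering.descendingMap().entrySet()) {
                if (e.getValue()) stripes[e.getKey()].writeLock().unlock();
                else              stripes[e.getKey()].readLock().unlock();
            }
        }
    }
}
```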
    • 8. Granted patent
    • Computer system and method for leasing memory location to allow predictable access to memory location
    • Publication number: US08219762B1
    • Publication date: 2012-07-10
    • Application number: US10918062
    • Filing date: 2004-08-13
    • Inventors: Nir N. Shavit, Ori Shalev
    • IPC: G06F12/00; G06F13/00
    • CPC: G06F12/0815
    • A synchronization technique for shared-memory multiprocessor systems involves acquiring exclusive ownership of a requested memory location for a predetermined, limited duration of time. If an “owning” process is unpredictably delayed, the ownership of the requested memory location expires after the predetermined duration of time, thereby making the memory location accessible to other processes and requiring the previous “owning” process to retry its operations on the memory location. If the “owning” process completes its operations on the memory location during the predetermination duration of time, the ownership of the memory location by the “owning” process is terminated and the memory location becomes accessible to other processes.
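A minimal sketch of the leasing idea on a single word: acquisition succeeds only if the location is unleased or the previous lease has expired, and a delayed former owner's release simply fails, forcing it to retry. The expiry-token encoding and the nanosecond clock are assumptions of this sketch, and the final store is assumed to complete within the remaining lease time, which is the premise of the technique.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: one memory word protected by a time-limited lease.  A
// thread that acquires the lease may operate on the word until the lease
// expires; after expiry any other thread may take it over, and the delayed
// former owner's release fails, forcing it to retry its operation.
class LeasedLocation {
    private static final long FREE = 0L;                    // 0 = no lease held
    private final AtomicLong leaseExpiry = new AtomicLong(FREE);
    private long value;                                      // the protected memory location

    // Try to lease the location for durationNanos.  Returns the expiry token on
    // success, or -1 if an unexpired lease is held by someone else.
    long tryAcquire(long durationNanos) {
        long now = System.nanoTime();
        long current = leaseExpiry.get();
        if (current != FREE && current > now) return -1;     // existing lease still valid
        long expiry = now + durationNanos;
        return leaseExpiry.compareAndSet(current, expiry) ? expiry : -1;
    }

    // Store the new value and release the lease.  Fails if the lease has already
    // lapsed because the owner was delayed, in which case the caller must retry.
    // Assumes the store itself completes within the remaining lease time.
    boolean updateAndRelease(long token, long newValue) {
        if (System.nanoTime() >= token) return false;        // lease expired while we were delayed
        value = newValue;
        return leaseExpiry.compareAndSet(token, FREE);        // release only our own lease
    }
}
```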
    • 9. Granted patent
    • Address level log-based synchronization of shared data
    • Publication number: US08037476B1
    • Publication date: 2011-10-11
    • Application number: US11263439
    • Filing date: 2005-10-31
    • Inventors: Nir N. Shavit, Ori Shalev
    • IPC: G06F9/46; G06F7/38; G06F7/00; G06F13/00
    • CPC: G06F9/526; G06F2209/522; G06F2209/523
    • A method of address-level log-based synchronization comprises a thread attempting to acquire a lock on an object. If its lock attempt fails, a thread logs, at a synchronization log, data access operations directed at the shared data object, and waits for a notification from the lock-owning thread indicating whether the logged operations succeeded. If its lock attempt succeeds, the lock-owning thread performs data access operations on the shared data object, and arbitrates among requests logged by other threads in the synchronization log, applying the modifications logged in the requests that do not conflict with other modification operations, and rejecting the requests that conflict. The master sends a success notification to the logging threads whose requests were accepted, and a failure notification to the logging threads whose requests were rejected.
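A compact sketch of the protocol over a shared array: every thread logs its intended write first, then either becomes the lock owner and arbitrates the log or waits for the owner's accept/reject notification. The conflict rule used here (reject any logged write to an address the owner wrote itself) is a simplification of the arbitration the abstract describes, and all names are illustrative.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch over a shared array: a thread logs its intended write,
// then either becomes the lock owner and arbitrates the log, or waits for the
// owner to accept or reject its logged request.
class LogSynchronizedArray {
    static final int PENDING = 0, ACCEPTED = 1, REJECTED = 2;

    static final class LogEntry {
        final int address;
        final long newValue;
        final AtomicInteger outcome = new AtomicInteger(PENDING);
        LogEntry(int address, long newValue) { this.address = address; this.newValue = newValue; }
    }

    private final long[] data;
    private final AtomicBoolean lock = new AtomicBoolean(false);
    private final Queue<LogEntry> log = new ConcurrentLinkedQueue<>();

    LogSynchronizedArray(int size) { data = new long[size]; }

    // Write newValue at address; returns true if the write took effect, false if
    // the lock owner rejected it as conflicting.
    boolean write(int address, long newValue) {
        LogEntry entry = new LogEntry(address, newValue);
        log.add(entry);                                     // publish the request in the synchronization log
        while (entry.outcome.get() == PENDING) {
            if (lock.compareAndSet(false, true)) {          // we became the lock owner
                try {
                    if (log.remove(entry)) {                // our own request is still unprocessed
                        data[address] = newValue;
                        entry.outcome.set(ACCEPTED);
                    }
                    LogEntry e;
                    while ((e = log.poll()) != null) {      // arbitrate requests logged by other threads
                        if (e.address == address) {
                            e.outcome.set(REJECTED);        // conflicts with the owner's own write
                        } else {
                            data[e.address] = e.newValue;   // non-conflicting: apply it for the logger
                            e.outcome.set(ACCEPTED);
                        }
                    }
                } finally {
                    lock.set(false);
                }
            } else {
                Thread.onSpinWait();                        // wait for the owner's notification
            }
        }
        return entry.outcome.get() == ACCEPTED;
    }
}
```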
    • 10. Granted patent
    • Method and apparatus for emulating linked-load/store-conditional synchronization
    • Publication number: US07870344B2
    • Publication date: 2011-01-11
    • Application number: US11864649
    • Filing date: 2007-09-28
    • Inventors: Nir N. Shavit, Mark S. Moir, Victor M. Luchangco
    • IPC: G06F12/00; G06F13/00
    • CPC: G06F12/0844; G06F9/30; G06F9/3004; G06F9/38; G06F9/466; G06F9/52; G06F12/00; G06F12/08; G06F12/0815; G06F12/084; G06F12/0842; G06F12/0846; Y10S707/99942
    • The design of nonblocking linked data structures using single-location synchronization primitives such as compare-and-swap (CAS) is a complex affair that often requires severe restrictions on the way pointers are used. One way to address this problem is to provide stronger synchronization operations, for example, ones that atomically modify one memory location while simultaneously verifying the contents of others. We provide a simple and highly efficient nonblocking implementation of such an operation: an atomic k-word-compare single-swap operation (KCSS). Our implementation is obstruction-free. As a result, it is highly efficient in the uncontended case and relies on contention management mechanisms in the contended cases. It allows linked data structure manipulation without the complexity and restrictions of other solutions. Additionally, as a building block of some implementations of our techniques, we have developed the first nonblocking software implementation of load-linked/store-conditional that does not severely restrict word size.
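The last sentence of the abstract, a nonblocking software emulation of load-linked/store-conditional, is the easiest piece to sketch. The version below obtains LL/SC semantics by allocating a fresh box for every successful store and using an identity compare-and-swap, so any intervening store, even one restoring the same numeric value, breaks the link; the patented construction avoids the per-store allocation and the word-size restriction, which this sketch does not attempt.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of software-emulated load-linked/store-conditional: every
// successful store installs a freshly allocated box, and store-conditional
// succeeds only if the location still holds the exact box returned by the
// matching load-linked, so any intervening store breaks the link (no ABA).
class EmulatedLLSC {
    static final class Cell {                 // immutable box; a new one per successful store
        final long value;
        Cell(long value) { this.value = value; }
    }

    private final AtomicReference<Cell> location = new AtomicReference<>(new Cell(0));

    Cell loadLinked() { return location.get(); }              // read the word and remember the link

    boolean storeConditional(Cell linked, long newValue) {    // fails if anyone stored since loadLinked
        return location.compareAndSet(linked, new Cell(newValue));
    }

    long incrementAndGet() {                  // the usual LL/SC retry loop, as a usage example
        for (;;) {
            Cell cur = loadLinked();
            if (storeConditional(cur, cur.value + 1)) return cur.value + 1;
        }
    }
}
```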