    • 23. Granted Invention Patent
    • Method and system for a processor to gain assured ownership of an up-to-date copy of data
    • Publication No.: US06636948B2
    • Publication Date: 2003-10-21
    • Application No.: US09834551
    • Filing Date: 2001-04-13
    • Inventors: Simon C. Steely, Jr.; Stephen R. Van Doren; Madhu Sharna
    • IPC: G06F 12/00
    • CPC: G06F 12/0817
    • Abstract: A performance-enhancing change-to-dirty (CTD) operation is disclosed wherein contention among several processors trying to gain ownership of a block of data is obviated by arranging the CTD to always succeed. A method and a system are disclosed where a processor in a multiprocessor system having a copy of data gains assured ownership of data that the processor may then write. The method provides for the conditions that may exist and for the scenario in which the requesting processor may have to wait for ownership. Conditions are handled where the memory is the “owner” of the data, where other processors are requesting ownership, and where copies of the data exist at other processors. The method provides for messages to other processors having copies of the data, informing them that the data is now invalid.
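The change-to-dirty flow described in the abstract can be summarized as a small simulation. The sketch below is a minimal illustration, assuming a simple directory that tracks a single owner and a sharer set; the names (Directory, change_to_dirty, MEMORY) and the waiting behaviour are illustrative assumptions, not structures taken from the patent.

```python
# Minimal sketch of an always-succeeding change-to-dirty (CTD) request,
# assuming a simple directory that tracks the owner and the sharer set.
# Names (Directory, change_to_dirty, MEMORY) are illustrative, not from the patent.

MEMORY = "memory"

class Directory:
    def __init__(self, block):
        self.block = block
        self.owner = MEMORY          # memory starts as the owner of the block
        self.sharers = set()         # processors holding read-only copies
        self.invalidated = []        # record of invalidate messages sent

    def change_to_dirty(self, requester):
        """Grant `requester` exclusive ownership; the request never fails,
        although it may have to wait behind a current non-memory owner."""
        if self.owner not in (MEMORY, requester):
            # Another processor owns the block: the requester waits until the
            # current owner's write-back reaches memory, then the CTD proceeds.
            self.wait_for_writeback()
        # Invalidate every other read-only copy so the requester may write.
        for sharer in self.sharers - {requester}:
            self.invalidated.append((sharer, self.block))
        self.sharers = {requester}
        self.owner = requester       # CTD always ends with the requester as owner
        return True

    def wait_for_writeback(self):
        # Placeholder: in a real protocol this blocks until ownership returns
        # to memory; here it simply models the handoff.
        self.owner = MEMORY


d = Directory(block=0x40)
d.sharers.update({"P0", "P1", "P2"})
assert d.change_to_dirty("P0")       # P0 gains ownership; P1 and P2 are invalidated
print(d.owner, d.invalidated)
```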
    • 25. Granted Invention Patent
    • High-performance non-blocking switch with multiple channel ordering constraints
    • Publication No.: US06249520B1
    • Publication Date: 2001-06-19
    • Application No.: US08957664
    • Filing Date: 1997-10-24
    • Inventors: Simon C. Steely, Jr.; Stephen R. VanDoren; Madhumitra Sharma; Craig D. Keefer; David W. Davis
    • IPC: H04L 12/50
    • CPC: G06F 13/4022; G06F 12/0826; G06F 15/17393
    • Abstract: An architecture and coherency protocol for use in a large SMP computer system includes a hierarchical switch structure which allows a number of multi-processor nodes to be coupled to the switch and to operate at optimum performance. Within each multi-processor node, a simultaneous buffering system is provided that allows all of the processors of the multi-processor node to operate at peak performance. A memory is shared among the nodes, with a portion of the memory resident at each of the multi-processor nodes. Each of the multi-processor nodes includes a number of elements for maintaining memory coherency, including a victim cache, a directory and a transaction tracking table. The victim cache allows for selective updates of victim data destined for memory stored at a remote multi-processing node, thereby improving the overall performance of memory. Memory performance is additionally improved by including, at each memory, a delayed write buffer which is used in conjunction with the directory to identify victims that are to be written to memory. An arb bus coupled to the output of the directory of each node provides a central ordering point for all messages that are transferred through the SMP. The messages comprise a number of transactions, and each transaction is assigned to one of a number of different virtual channels, depending upon the processing stage of the message. The use of virtual channels thus helps to maintain data coherency by providing a straightforward method for maintaining system order. Using the virtual channels and the directory structure, cache coherency problems that would previously result in deadlock may be avoided.
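The virtual-channel idea in the abstract, where each message travels on a channel determined by its processing stage so later-stage traffic can never be blocked behind earlier-stage traffic, can be illustrated with a short sketch. The three channel names and message types below are hypothetical assumptions; the patent's actual channel set and arbitration are not reproduced here.

```python
# A minimal sketch of per-stage virtual channels, assuming three stages
# (request, probe, response). Channel and message-type names are hypothetical.

from collections import deque

VIRTUAL_CHANNELS = ("Q0_request", "Q1_probe", "Q2_response")

STAGE_TO_CHANNEL = {
    "read_miss": "Q0_request",     # a new request enters on the request channel
    "probe": "Q1_probe",           # forwarded probes use their own channel
    "fill": "Q2_response",         # fills/acks complete on the response channel
}

class SwitchPort:
    def __init__(self):
        # One FIFO per virtual channel: a full request queue can never
        # prevent a response from draining, which is what breaks the
        # classic request-response deadlock cycle.
        self.queues = {vc: deque() for vc in VIRTUAL_CHANNELS}

    def enqueue(self, message_type, payload):
        self.queues[STAGE_TO_CHANNEL[message_type]].append(payload)

    def drain(self, vc):
        q = self.queues[vc]
        while q:
            yield q.popleft()


port = SwitchPort()
port.enqueue("read_miss", ("P3", 0x80))
port.enqueue("fill", ("P1", 0x40))
# Responses drain independently of pending requests.
print(list(port.drain("Q2_response")))
```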
    • 26. Granted Invention Patent
    • Victimization of clean data blocks
    • Publication No.: US06202126B1
    • Publication Date: 2001-03-13
    • Application No.: US08957697
    • Filing Date: 1997-10-24
    • Inventors: Stephen Van Doren; Simon C. Steely, Jr.; Madhumitra Sharma
    • IPC: G06F 12/00
    • CPC: G06F 12/0804; G06F 12/0833
    • Abstract: A method is provided for preventing inadvertent invalidation of data elements in a system having a separate probe queue and fill queue for each central processing unit, wherein a central processing unit stores a clean data element, which would otherwise have been discarded, in a victim data buffer when it is evicted from cache. The central processing unit subsequently issues a clean-victim command to the system control logic when the read-miss or read-miss-modify command, targeting the data element that maps to the same location in cache as the clean data element, is issued. The clean-victim command causes the duplicate tag store to indicate that the clean data element is no longer stored in that central processing unit's cache. While the data is stored in the victim data buffer, the central processing unit cannot issue a probe message that targets that data until the victim data buffer has been deallocated. The central processing unit cannot modify the data element and therefore, if a probe invalidate has previously been issued for the clean version of the data element, it will not be able to inadvertently invalidate a modified version of the data element.
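A rough sketch of the clean-victim interaction between a CPU's victim data buffer and the system's duplicate tag store is shown below, under the assumption of a single-level cache and a simplified message flow; the class and method names (DuplicateTagStore, clean_victim, evict_clean) are illustrative, not the patent's.

```python
# A minimal sketch of the clean-victim idea, assuming a duplicate tag store
# kept by the system and a per-CPU victim data buffer.

class DuplicateTagStore:
    """System-side record of which blocks each CPU's cache currently holds."""
    def __init__(self):
        self.present = set()         # (cpu, block) pairs believed to be cached

    def clean_victim(self, cpu, block):
        # The clean-victim command tells the system that this CPU no longer
        # caches the block, so later probe invalidates are not sent for it.
        self.present.discard((cpu, block))


class Cpu:
    def __init__(self, name, tag_store):
        self.name = name
        self.cache = {}              # block -> data (flat cache for brevity)
        self.victim_buffer = {}      # evicted-but-not-yet-released blocks
        self.tag_store = tag_store

    def evict_clean(self, block):
        """Evict a clean block: keep it in the victim buffer instead of simply
        discarding it, and notify the duplicate tag store with clean-victim."""
        self.victim_buffer[block] = self.cache.pop(block)
        self.tag_store.clean_victim(self.name, block)

    def deallocate_victim(self, block):
        self.victim_buffer.pop(block, None)


dts = DuplicateTagStore()
cpu0 = Cpu("cpu0", dts)
cpu0.cache[0x100] = b"clean data"
dts.present.add(("cpu0", 0x100))

cpu0.evict_clean(0x100)             # a read-miss to the same cache index would trigger this
assert ("cpu0", 0x100) not in dts.present   # no stale probe invalidate will be issued
```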
    • 30. Granted Invention Patent
    • Cache spill management techniques using cache spill prediction
    • Publication No.: US08407421B2
    • Publication Date: 2013-03-26
    • Application No.: US12639214
    • Filing Date: 2009-12-16
    • Inventors: Simon C. Steely, Jr.; William C. Hasenplaugh; Aamer Jaleel; George Z. Chrysos
    • IPC: G06F 12/00
    • CPC: G06F 12/0806; G06F 12/12
    • Abstract: An apparatus and method are described herein for intelligently spilling cache lines. The usefulness of cache lines previously spilled from a source cache is learned, such that later evictions of useful cache lines from the source cache are intelligently selected for spill. Furthermore, another learning mechanism, cache spill prediction, may be implemented separately or in conjunction with usefulness prediction. Cache spill prediction is capable of learning the effectiveness of remote caches at holding spilled cache lines for the source cache. As a result, cache lines are capable of being intelligently selected for spill and intelligently distributed among remote caches based on the effectiveness of each remote cache in holding spilled cache lines for the source cache.
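The two learning mechanisms in the abstract, spill usefulness and per-remote-cache effectiveness, can be approximated with saturating counters. The sketch below is a minimal illustration under assumed 4-bit counters and a neutral starting value; the thresholds, counter widths, and the choose_spill_target policy are illustrative choices rather than the patent's mechanism.

```python
# A minimal sketch of cache-spill prediction, assuming simple saturating
# counters: one global counter for "was spilling useful at all" and one per
# remote cache for "how well does this cache hold our spilled lines".

class SpillPredictor:
    MAX = 15                          # 4-bit saturating counters (assumed width)

    def __init__(self, remote_caches):
        self.usefulness = 8           # start neutral
        self.per_remote = {rc: 8 for rc in remote_caches}

    def record_remote_hit(self, remote):
        # A spilled line was re-used out of a remote cache: spilling paid off.
        self.usefulness = min(self.MAX, self.usefulness + 1)
        self.per_remote[remote] = min(self.MAX, self.per_remote[remote] + 1)

    def record_wasted_spill(self, remote):
        # A spilled line aged out of the remote cache without being re-used.
        self.usefulness = max(0, self.usefulness - 1)
        self.per_remote[remote] = max(0, self.per_remote[remote] - 1)

    def choose_spill_target(self):
        """Return the remote cache to spill an evicted line to, or None if
        the predictor has learned that spilling is not currently useful."""
        if self.usefulness < 8:
            return None
        return max(self.per_remote, key=self.per_remote.get)


pred = SpillPredictor(["L2_tile1", "L2_tile2", "L2_tile3"])
pred.record_remote_hit("L2_tile2")
pred.record_wasted_spill("L2_tile3")
print(pred.choose_spill_target())     # L2_tile2
```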