Hot Terms
    • 1. Invention Application
    • MULTIPROCESSING CIRCUIT WITH CACHE CIRCUITS THAT ALLOW WRITING TO NOT PREVIOUSLY LOADED CACHE LINES
    • WO2009130671A8
    • 2010-02-04
    • PCT/IB2009051649
    • 2009-04-22
    • NXP B.V.; HOOGERBRUGGE, Jan; TERECHKO, Andrei Sergeevich
    • HOOGERBRUGGE, Jan; TERECHKO, Andrei Sergeevich
    • G06F12/08
    • G06F12/0822
    • Data is processed using a first and second processing circuit (12) coupled to a background memory (10) via a first and second cache circuit (14, 14') respectively. Each cache circuit (14, 14') stores cache lines, state information defining states of the stored cache lines, and flag information for respective addressable locations within at least one stored cache line. The cache control circuit of the first cache circuit (14) is configured to selectively set the flag information for part of the addressable locations within the at least one stored cache line to a valid state when the first processing circuit (12) writes data to said part of the locations, without prior loading of the at least one stored cache line from the background memory (10). Data is copied from the at least one cache line into the second cache circuit (14') from the first cache circuit (14) in combination with the flag information for the locations within the at least one cache line. A cache miss signal is generated both in response to access commands addressing locations in cache lines that are not stored in the cache memory and in response to a read command addressing a location within the at least one cache line that is stored in the memory (140), when the flag information is not set.
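The abstract above describes per-location valid flags that let a processor write into a cache line without first fetching it from background memory, with a miss raised on reads of unflagged locations. The following is a minimal Python sketch of that idea; the class names, the 4-word line size, and the exception-based miss signal are illustrative assumptions, not the patent's actual design.

```python
class CacheMiss(Exception):
    """Signals that a location must be fetched from background memory."""
    pass

class CacheLine:
    def __init__(self, size=4):
        self.data = [None] * size
        self.valid = [False] * size  # one flag per addressable location

    def write(self, offset, value):
        # A write sets the flag for just that location, without any
        # prior load of the line from background memory.
        self.data[offset] = value
        self.valid[offset] = True

    def read(self, offset):
        # Reading a location whose flag is not set generates a miss,
        # exactly as if the line were absent from the cache.
        if not self.valid[offset]:
            raise CacheMiss(offset)
        return self.data[offset]

def copy_line(src):
    # Copying a line between cache circuits carries the flag
    # information along with the data.
    dst = CacheLine(len(src.data))
    dst.data = list(src.data)
    dst.valid = list(src.valid)
    return dst
```

Carrying the flags on the copy is what lets the second cache keep distinguishing written locations from never-loaded ones.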
    • 3. Invention Application
    • METHOD AND APPARATUS FOR CENTRALIZED SNOOP FILTERING
    • WO02017102A2
    • 2002-02-28
    • PCT/US2001/025061
    • 2001-08-10
    • G06F12/08G06F15/16
    • G06F12/0831G06F12/0822G06F12/0828G06F2212/507
    • An example embodiment of a computer system utilizing a central snoop filter includes several nodes coupled together via a switching device. Each of the nodes may include several processors and caches as well as a block of system memory. All traffic from one node to another takes place through the switching device. The switching device includes a snoop filter that tracks cache line coherency information for all caches in the computer system. The snoop filter has enough entries to track the tags and state information for all entries in all caches in all of the system's nodes. In addition to the tag and state information, the snoop filter stores information indicating which of the nodes has a copy of each cache line. The snoop filter serves in part to keep snoop transactions from being performed at nodes that do not contain a copy of the subject cache line, thereby reducing system overhead, reducing traffic across the system interconnect busses, and reducing the amount of time required to perform snoop transactions.
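The key structure in this abstract is a per-line record of which nodes hold a copy, used to snoop only those nodes. A toy Python model is sketched below; the tag-keyed dictionary and method names are assumptions made for illustration.

```python
class SnoopFilter:
    """Centralized snoop filter living in the switching device."""

    def __init__(self):
        # cache-line tag -> (state, set of node ids holding a copy)
        self.entries = {}

    def record(self, tag, state, node):
        # Note that `node` now holds a copy of the line `tag`.
        _, nodes = self.entries.get(tag, (state, set()))
        nodes.add(node)
        self.entries[tag] = (state, nodes)

    def snoop_targets(self, tag, requesting_node):
        # Only nodes that actually hold a copy are snooped; skipping
        # the others is what cuts interconnect traffic and latency.
        _, nodes = self.entries.get(tag, (None, set()))
        return nodes - {requesting_node}
```

A request for a line no node holds yields an empty target set, so no snoop traffic crosses the interconnect at all.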
    • 4. Invention Application
    • HIGH-AVAILABILITY SUPER SERVER
    • WO1997030399A1
    • 1997-08-21
    • PCT/US1997002571
    • 1997-02-19
    • INTERGRAPH CORPORATION
    • INTERGRAPH CORPORATION; McKINNEY, Arthur C.; McCARVER, Charles H., Jr.; SAMIEE, Vahid
    • G06F15/16
    • G06F12/0813G06F12/0822G06F12/0831G06F13/4022G06F13/4077G06F15/173H03K19/00361H03K19/018521
    • The present invention provides a high-availability parallel processing server that is a multi-processor computer with a segmented memory architecture. The processors are grouped into processor clusters, with each cluster consisting of up to four processors in a preferred embodiment. Each cluster of processors has dedicated memory buses for communicating with each of the memory segments. The invention is designed to be able to maintain coherent interaction between all processors and memory segments within a preferred embodiment. A preferred embodiment uses Intel Pentium-Pro processors. The present invention comprises a plurality of processor segments (a cluster of one or more CPUs), memory segments (separate regions of memory), and memory communication buses (pathways to communicate with the memory segments). Each processor segment has a dedicated communication bus for interacting with each memory segment, allowing different processors to access different memory segments in parallel. The processors, in a preferred embodiment, may further include an internal cache and flags associated with the cache to allow multi-processor cache coherency in external write-back cache.
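The defining property of the segmented architecture above is that every (cluster, segment) pair gets its own dedicated bus, so distinct clusters can reach distinct segments simultaneously. A minimal model of that bus topology, with arbitrary cluster/segment counts and made-up names, might look like this:

```python
class SegmentedServer:
    """Toy model: one dedicated bus per (cluster, memory segment) pair."""

    def __init__(self, clusters, segments):
        # busy[(c, s)] is True while cluster c's bus to segment s is in use
        self.busy = {(c, s): False
                     for c in range(clusters) for s in range(segments)}

    def acquire_bus(self, cluster, segment):
        # Succeeds unless this particular cluster-to-segment bus is
        # already occupied; other clusters' buses are unaffected.
        key = (cluster, segment)
        if self.busy[key]:
            return False
        self.busy[key] = True
        return True

    def release_bus(self, cluster, segment):
        self.busy[(cluster, segment)] = False
```

Because the buses are per-pair, two clusters touching the same segment also proceed independently; only a single cluster re-using its own busy bus must wait.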
    • 5. Invention Application
    • METHOD AND SYSTEM FOR ORDERING I/O ACCESS IN A MULTI-NODE ENVIRONMENT
    • WO2015134103A1
    • 2015-09-11
    • PCT/US2014/072816
    • 2014-12-30
    • CAVIUM, INC.
    • KESSLER, Richard, E.
    • G06F13/20G06F5/06
    • G06F13/423G06F12/0822G06F12/0833G06F13/10G06F13/20
    • According to at least one example embodiment, a multi-chip system includes multiple chip devices configured to communicate with each other and share resources, such as I/O devices. According to at least one example embodiment, a method of synchronizing access to an input/output (I/O) device in the multi-chip system comprises initiating, by a first agent of the multi-chip system, a first operation for accessing the I/O device; the first operation is queued, prior to execution by the I/O device, in a queue. Once the first operation is queued, an indication of such queuing is provided. Upon detecting, by a second agent of the multi-chip system, the indication that the first operation is queued, a second operation to access the I/O device is initiated; the second operation is queued subsequent to the first operation in the queue.
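The ordering handshake described above can be sketched in a few lines: the first agent enqueues and publishes an indication, and the second agent enqueues only after observing it, which guarantees queue order. The real mechanism is hardware across chip devices; this Python model and its names are purely illustrative assumptions.

```python
from collections import deque

class SharedIoQueue:
    """Toy model of the ordered I/O access queue shared by two agents."""

    def __init__(self):
        self.queue = deque()
        self.first_queued = False  # the "indication" of queuing op 1

    def enqueue_first(self, op):
        # First agent: queue the operation, then publish the indication.
        self.queue.append(op)
        self.first_queued = True

    def enqueue_second(self, op):
        # Second agent: must have detected the indication before
        # initiating, so its operation lands behind the first one.
        assert self.first_queued, "second op must wait for the indication"
        self.queue.append(op)
```

Since the second enqueue is gated on the indication, the device always drains the first agent's operation before the second agent's.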
    • 9. Invention Application
    • FORWARD STATE FOR USE IN CACHE COHERENCY IN A MULTIPROCESSOR SYSTEM
    • WO2004061678A3
    • 2005-02-03
    • PCT/US0338347
    • 2003-12-03
    • INTEL CORP
    • HUM, Herbert; GOODMAN, James A.
    • G06F12/08
    • G06F12/0822G06F12/0813G06F12/0831
    • Described herein is a cache coherency protocol having five states: Modified, Exclusive, Shared, Invalid and Forward (MESIF). The MESIF cache coherency protocol includes a Forward (F) state that designates a single copy of data from which further copies can be made. A cache line in the F state is used to respond to requests for a copy of the cache line. In one embodiment, the newly created copy is placed in the F state and the cache line previously in the F state is put in the Shared (S) state, or the Invalid (I) state. Thus, if the cache line is shared, one shared copy is in the F state and the remaining copies of the cache line are in the S state.
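The F-state hand-off in this abstract, where the new copy takes F and the old F copy drops to S, is easy to model. Below is a minimal sketch covering only that embodiment; the dictionary representation and function name are assumptions for illustration.

```python
# MESIF state labels
M, E, S, I, F = "Modified", "Exclusive", "Shared", "Invalid", "Forward"

def fulfill_copy_request(caches, requester):
    """caches maps cache_id -> MESIF state for one cache line.

    The single F-state cache supplies the data; per the embodiment
    described, the new copy takes F and the old F copy drops to S.
    """
    holder = next(c for c, state in caches.items() if state == F)
    caches[holder] = S        # previous designated copy becomes Shared
    caches[requester] = F     # newly created copy becomes the forwarder
    return caches
```

The invariant the protocol maintains is visible here: at most one copy of a shared line is ever in F, so there is never ambiguity about which cache answers the next request.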
    • 10. Invention Application
    • USING AN L2 DIRECTORY TO FACILITATE SPECULATIVE LOADS IN A MULTIPROCESSOR SYSTEM
    • WO03001383A2
    • 2003-01-03
    • PCT/US0222159
    • 2002-06-26
    • SUN MICROSYSTEMS INC
    • CHAUDHRY, Shailender; TREMBLAY, Marc
    • G06F9/38G06F12/08G06F12/00
    • G06F9/3842G06F9/3834G06F9/3861G06F12/0811G06F12/0822G06F12/0828
    • One embodiment of the present invention provides a system that facilitates speculative load operations in a multiprocessor system. This system operates by maintaining a record at an L2 cache of speculative load operations that have returned data values through the L2 cache to associated L1 caches, wherein a speculative load operation is a load operation that is speculatively initiated before a preceding load operation has returned. In response to receiving an invalidation event, the system invalidates a target line in the L2 cache. The system also performs a lookup in the record to identify affected L1 caches that are associated with speculative load operations that may be affected by the invalidation of the target line in the L2 cache. Next, the system sends replay commands to the affected L1 caches in order to replay the affected speculative load operations, so that the affected speculative load operations take place after invalidation of the target line in the L2 cache.
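The mechanism above amounts to an L2-side directory from speculatively loaded lines to the L1 caches that consumed them, consulted on invalidation to issue replays. A compact Python sketch follows; the class name, line-address granularity, and replay list are assumptions drawn only from the abstract.

```python
from collections import defaultdict

class L2SpeculativeDirectory:
    """Record, kept at the L2 cache, of outstanding speculative loads."""

    def __init__(self):
        # line address -> set of L1 caches that speculatively loaded it
        self.record = defaultdict(set)
        self.replays = []  # (l1_id, line) replay commands issued so far

    def note_speculative_load(self, line, l1_id):
        # Called when a speculative load returns data through L2 to an L1.
        self.record[line].add(l1_id)

    def invalidate(self, line):
        # Invalidate the target line, look up the affected L1 caches,
        # and send them replay commands so the speculative loads
        # re-execute after the invalidation.
        for l1_id in self.record.pop(line, set()):
            self.replays.append((l1_id, line))
```

Unaffected L1 caches never appear in the lookup result, so only the loads that could have observed stale data are replayed.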