    • 62. Invention Application
    • Data processing system, cache system and method for reducing imprecise invalid coherency states
    • Publication No.: US20070204110A1
    • Publication Date: 2007-08-30
    • Application No.: US11364774
    • Filing Date: 2006-02-28
    • Inventors: Guy Guthrie, William Starke, Derek Williams
    • IPC: G06F13/28
    • CPC: G06F12/0831; G06F12/0813
    • A cache coherent data processing system includes at least first and second coherency domains. In a first cache memory within the first coherency domain of the data processing system, a coherency state field associated with a storage location and an address tag is set to a first data-invalid coherency state that indicates that the address tag is valid and that the storage location does not contain valid data. In response to snooping a data-invalid state update request, the first cache memory updates the coherency state field from the first data-invalid coherency state to a second data-invalid coherency state that indicates that the address tag is valid, that the storage location does not contain valid data, and that a memory block associated with the address tag is likely cached within the first coherency domain.
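The abstract above is a compact description of a two-step directory state transition. The snippet below is a minimal, hypothetical Python sketch of that transition; the state names, class layout, and request string are illustrative assumptions rather than the patent's own terminology.

```python
# Hypothetical sketch of the two data-invalid coherency states and the
# snoop-driven upgrade between them (not the patent's implementation).
from dataclasses import dataclass
from enum import Enum, auto

class CoherencyState(Enum):
    INVALID = auto()          # tag invalid, no valid data
    DATA_INVALID_1 = auto()   # tag valid, storage location holds no valid data
    DATA_INVALID_2 = auto()   # tag valid, no valid data, block likely cached in this domain

@dataclass
class DirectoryEntry:
    address_tag: int
    state: CoherencyState = CoherencyState.INVALID

    def snoop(self, request: str, tag: int) -> None:
        # On snooping a data-invalid state update request that hits a valid tag
        # held in the first data-invalid state, move to the second one.
        if (request == "data_invalid_state_update"
                and tag == self.address_tag
                and self.state is CoherencyState.DATA_INVALID_1):
            self.state = CoherencyState.DATA_INVALID_2

entry = DirectoryEntry(address_tag=0x80, state=CoherencyState.DATA_INVALID_1)
entry.snoop("data_invalid_state_update", tag=0x80)
print(entry.state)  # CoherencyState.DATA_INVALID_2
```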
    • 63. Invention Application
    • Data processing system, cache system and method for handling a flush operation in a data processing system having multiple coherency domains
    • Publication No.: US20070180196A1
    • Publication Date: 2007-08-02
    • Application No.: US11342951
    • Filing Date: 2006-01-30
    • Inventors: Guy Guthrie, John Hollaway, William Starke, Derek Williams
    • IPC: G06F13/28
    • CPC: G06F12/0822; G06F12/0804; G06F12/0831
    • A cache coherent data processing system includes at least first and second coherency domains. The first coherency domain contains a memory controller, an associated system memory having a target memory block identified by a target address, and a domain indicator indicating whether the target memory block is cached outside the first coherency domain. During operation, the first coherency domain receives a flush operation broadcast to the first and second coherency domains, where the flush operation specifies the target address of the target memory block. The first coherency domain also receives a combined response for the flush operation representing a system-wide response to the flush operation. In response to receipt in the first coherency domain of the combined response, a determination is made if the combined response indicates that a cached copy of the target memory block may remain within the data processing system. In response to a determination that the combined response indicates that a cached copy of the target memory block may remain in the data processing system, the domain indicator is updated to indicate that the target memory block is cached outside of the first coherency domain.
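The mechanism in this abstract reduces to a single conditional update of the domain indicator when the combined response arrives. A minimal sketch follows, assuming illustrative names for the controller and the response flag (none of which come from the patent itself).

```python
# Hypothetical sketch: how the memory controller in the first coherency domain
# might react to the combined response for a flush operation.
class MemoryController:
    def __init__(self):
        # domain_indicator[address] is True when the target memory block
        # may be cached outside this coherency domain.
        self.domain_indicator = {}

    def on_combined_response(self, target_address: int, copy_may_remain: bool) -> None:
        # The combined response is the system-wide response to the flush.
        # If it indicates a cached copy of the target block may remain anywhere
        # in the system, record that the block is cached outside this domain.
        if copy_may_remain:
            self.domain_indicator[target_address] = True

mc = MemoryController()
mc.on_combined_response(target_address=0x1000, copy_may_remain=True)
print(mc.domain_indicator[0x1000])  # True
```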
    • 64. Invention Application
    • Data processing system, method and interconnect fabric having a flow governor
    • Publication No.: US20060187958A1
    • Publication Date: 2006-08-24
    • Application No.: US11055399
    • Filing Date: 2005-02-10
    • Inventors: Leo Clark, Guy Guthrie, William Starke
    • IPC: H04J3/22; H04J3/16
    • CPC: H04L47/722; H04L47/70; H04L49/109
    • A data processing system includes a plurality of local hubs each coupled to a remote hub by a respective one a plurality of point-to-point communication links. Each of the plurality of local hubs queues requests for access to memory blocks for transmission on a respective one of the point-to-point communication links to a shared resource in the remote hub. Each of the plurality of local hubs transmits requests to the remote hub utilizing only a fractional portion of a bandwidth of its respective point-to-point communication link. The fractional portion that is utilized is determined by an allocation policy based at least in part upon a number of the plurality of local hubs and a number of processing units represented by each of the plurality of local hubs. The allocation policy prevents overruns of the shared resource.
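The abstract states that each local hub's fraction of link bandwidth is set by an allocation policy based on the number of local hubs and the processing units each represents, but it does not give the formula. The sketch below shows one plausible policy under that constraint; the weighting scheme is purely an assumption for illustration.

```python
# One *possible* allocation policy consistent with the abstract (assumed, not
# taken from the patent): weight each hub by its processing-unit count and
# normalise so the fractions sum to 1, preventing overruns of the shared resource.
from fractions import Fraction

def allocate_fractions(units_per_hub):
    """Return, for each local hub, the fraction of its point-to-point link
    bandwidth it may use for requests to the shared resource in the remote hub."""
    total_units = sum(units_per_hub)
    return [Fraction(units, total_units) for units in units_per_hub]

# Four local hubs, each representing a different number of processing units;
# the resulting fractions are 1/3, 1/3, 1/6 and 1/6, which sum to 1.
print(allocate_fractions([4, 4, 2, 2]))
```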
    • 65. Invention Application
    • L2 cache array topology for large cache with different latency domains
    • Publication No.: US20060179223A1
    • Publication Date: 2006-08-10
    • Application No.: US11054930
    • Filing Date: 2005-02-10
    • Inventors: Leo Clark, Guy Guthrie, Kirk Livingston, William Starke
    • IPC: G06F12/00
    • CPC: G06F12/0897; G06F12/0822; G06F12/0824
    • A cache memory logically associates a cache line with at least two cache sectors of a cache array wherein different sectors have different output latencies and, for a load hit, selectively enables the cache sectors based on their latency to output the cache line over successive clock cycles. Larger wires having a higher transmission speed are preferably used to output the cache line corresponding to the requested memory block. In the illustrative embodiment the cache is arranged with rows and columns of the cache sectors, and a given cache line is spread across sectors in different columns, with at least one portion of the given cache line being located in a first column having a first latency, and another portion of the given cache line being located in a second column having a second latency greater than the first latency. One set of wires oriented along a horizontal direction may be used to output the cache line, while another set of wires oriented along a vertical direction may be used for maintenance of the cache sectors. A given cache line is further preferably spread across sectors in different rows or cache ways. For example, a cache line can be 128 bytes and spread across four sectors in four different columns, each sector containing 32 bytes of the cache line, and the cache line is output over four successive clock cycles with one sector being transmitted during each of the four cycles.
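The closing example in the abstract (a 128-byte line held as four 32-byte sectors, streamed over four successive clock cycles) can be expressed as a short simulation. The sketch below is illustrative only and models just the sector-per-cycle output ordering, not the wiring or column layout.

```python
# Hypothetical sketch: reassemble a 128-byte cache line from four 32-byte
# sectors, one sector per clock cycle, lowest-latency sector first.
SECTOR_BYTES = 32

def output_cache_line(sectors):
    """sectors: mapping from output latency (in cycles) to a 32-byte sector.
    One sector is transmitted per clock cycle; the full line is available
    after the slowest sector has been read."""
    line = bytearray()
    for cycle in sorted(sectors):     # successive clock cycles
        data = sectors[cycle]         # sector whose latency matches this cycle
        assert len(data) == SECTOR_BYTES
        line += data
    return bytes(line)

sectors = {1: bytes([1]) * 32, 2: bytes([2]) * 32,
           3: bytes([3]) * 32, 4: bytes([4]) * 32}
assert len(output_cache_line(sectors)) == 128
```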
    • 67. Invention Grant
    • 1-bit token ring arbitration architecture
    • Publication No.: US5388223A
    • Publication Date: 1995-02-07
    • Application No.: US755474
    • Filing Date: 1991-09-05
    • Inventors: Guy Guthrie, Jeffery L. Swarts
    • IPC: G06F13/37; H04L12/433; G06F13/36
    • CPC: G06F13/37; H04L12/433
    • A 1-bit token ring arbitration architecture where a plurality of chips which require access to a shared bus are coupled together in a ring is described. Each chip receives an arbitration in signal from the preceding member of the ring which is used to receive the token. Each chip transmits an arbitration out signal to the following member of the ring to send the token to the following member. In the preferred embodiment, the token appears as a 1 cycle active low pulse. An error signal notifies all the bus participants that a ring error has been detected. Preferably, the number of cycles the error signal is held active, the more severe the error. A request of bus (ROB) signal notifies the chip holding the token that another bus member needs to use the bus. The ROB signal allows the current holder of the token to maintain control of the bus if it has further processing on the bus as long as no other bus member needs the bus. A Token Hold Timer may be included in a ring member which defines how long the member can hold on to the token after receiving notification on the ROB line that another bus participant wants the bus.
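The abstract describes a complete arbitration protocol: a circulating one-bit token, a request-of-bus (ROB) signal, and an optional token hold timer. The following is a small behavioural simulation of that idea; the class names, timing granularity, and scheduling details are assumptions made for readability, not the patent's signal-level design.

```python
# Hypothetical behavioural simulation of 1-bit token ring arbitration.
class Chip:
    def __init__(self, name, pending_work=0, hold_timer=2):
        self.name = name
        self.pending_work = pending_work   # bus transactions this chip still wants
        self.hold_timer = hold_timer       # max cycles to keep the token once ROB is seen

def simulate(chips, cycles):
    holder = 0           # index of the chip currently holding the token
    held_since_rob = 0
    for cycle in range(cycles):
        chip = chips[holder]
        # ROB is asserted whenever another ring member still needs the bus.
        rob = any(c.pending_work for i, c in enumerate(chips) if i != holder)
        if chip.pending_work:
            chip.pending_work -= 1         # use the bus for one transaction
            print(f"cycle {cycle}: {chip.name} drives the bus")
        held_since_rob = held_since_rob + 1 if rob else 0
        # Pass the token (the 1-cycle pulse on arbitration-out) when this chip
        # is idle, or when ROB is asserted and the hold timer has expired.
        if not chip.pending_work or held_since_rob >= chip.hold_timer:
            holder = (holder + 1) % len(chips)
            held_since_rob = 0

simulate([Chip("A", pending_work=3), Chip("B", pending_work=1), Chip("C")], cycles=8)
```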
    • 69. Invention Application
    • L2 CACHE ARRAY TOPOLOGY FOR LARGE CACHE WITH DIFFERENT LATENCY DOMAINS
    • Publication No.: US20080077740A1
    • Publication Date: 2008-03-27
    • Application No.: US11947742
    • Filing Date: 2007-11-29
    • Inventors: Leo Clark, Guy Guthrie, Kirk Livingston, William Starke
    • IPC: G06F12/08
    • CPC: G06F12/0897; G06F12/0822; G06F12/0824
    • A cache memory logically associates a cache line with at least two cache sectors of a cache array wherein different sectors have different output latencies and, for a load hit, selectively enables the cache sectors based on their latency to output the cache line over successive clock cycles. Larger wires having a higher transmission speed are preferably used to output the cache line corresponding to the requested memory block. In the illustrative embodiment the cache is arranged with rows and columns of the cache sectors, and a given cache line is spread across sectors in different columns, with at least one portion of the given cache line being located in a first column having a first latency, and another portion of the given cache line being located in a second column having a second latency greater than the first latency. One set of wires oriented along a horizontal direction may be used to output the cache line, while another set of wires oriented along a vertical direction may be used for maintenance of the cache sectors. A given cache line is further preferably spread across sectors in different rows or cache ways. For example, a cache line can be 128 bytes and spread across four sectors in four different columns, each sector containing 32 bytes of the cache line, and the cache line is output over four successive clock cycles with one sector being transmitted during each of the four cycles.
    • 70. Invention Application
    • Processor, data processing system, and method for initializing a memory block in a data processing system having multiple coherency domains
    • Publication No.: US20070226423A1
    • Publication Date: 2007-09-27
    • Application No.: US11388001
    • Filing Date: 2006-03-23
    • Inventors: Ravi Arimilli, Guy Guthrie, William Starke, Derek Williams
    • IPC: G06F13/28
    • CPC: G06F12/0822; G06F12/084
    • A data processing system includes at least first and second coherency domains, each including at least one processor core and a memory. In response to an initialization operation by a processor core that indicates a target memory block to be initialized, a cache memory in the first coherency domain determines a coherency state of the target memory block with respect to the cache memory. In response to the determination, the cache memory selects a scope of broadcast of an initialization request identifying the target memory block. A narrower scope including the first coherency domain and excluding the second coherency domain is selected in response to a determination of a first coherency state, and a broader scope including the first coherency domain and the second coherency domain is selected in response to a determination of a second coherency state. The cache memory then broadcasts an initialization request with the selected scope. In response to the initialization request, the target memory block is initialized within a memory of the data processing system to an initialization value.
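The core of this abstract is the choice between a narrower and a broader broadcast scope based on the cache's coherency state for the target block. A minimal sketch follows, with hypothetical state names and scope labels (the abstract does not name the states).

```python
# Hypothetical sketch: choosing the broadcast scope of an initialization request
# from the coherency state a cache holds for the target memory block.
from enum import Enum, auto

class State(Enum):
    LOCAL_ONLY = auto()    # "first coherency state": block not cached outside this domain
    MAYBE_GLOBAL = auto()  # "second coherency state": block may be cached in other domains

def broadcast_scope(state: State) -> list[str]:
    # The narrower scope saves interconnect bandwidth when the local state
    # shows the block cannot be cached in the second coherency domain.
    if state is State.LOCAL_ONLY:
        return ["first_coherency_domain"]
    return ["first_coherency_domain", "second_coherency_domain"]

print(broadcast_scope(State.LOCAL_ONLY))    # ['first_coherency_domain']
print(broadcast_scope(State.MAYBE_GLOBAL))  # both domains
```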