    • 177. Invention grant
    • Title: L2 cache controller with slice directory and unified cache structure
    • Publication number: US08001330B2
    • Publication date: 2011-08-16
    • Application number: US12325266
    • Filing date: 2008-12-01
    • Inventors: Leo James Clark; James Stephen Fields, Jr.; Guy Lynn Guthrie; William John Starke
    • IPC: G06F13/00
    • CPC: G06F12/0851; G06F12/0811
    • Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first directory to access the first array slice while using a second directory to access the second array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In one embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. The cache array is arranged with rows and columns of cache sectors wherein a cache line is spread across sectors in different rows and columns, with a portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency. (A conceptual sketch of the slice/directory/arbiter arrangement follows this record.)
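The abstract above describes two array slices, each fronted by its own directory with a per-slice directory arbiter, while a single cache arbiter controls the single access/command port. The following is a minimal Python sketch of that arrangement, assuming a FIFO per-slice arbiter and a round-robin cache arbiter; all class names, policies, and the address-to-slice hash are invented for illustration and are not taken from the patent.

```python
# Conceptual sketch (not the patented implementation): a cache array logically
# split into two slices, each with its own directory and directory arbiter,
# with one cache arbiter granting one access per cycle to the shared port.
from collections import deque


class SliceDirectory:
    """Directory for one array slice; its arbiter picks one internal request per cycle."""
    def __init__(self, name):
        self.name = name
        self.pending = deque()          # internal requests waiting on this slice

    def enqueue(self, addr):
        self.pending.append(addr)

    def arbitrate(self):
        # Directory arbiter: resolve conflicting internal requests (FIFO here).
        return self.pending.popleft() if self.pending else None


class CacheArbiter:
    """Single arbiter controlling the single access/command port of the array."""
    def __init__(self, slices):
        self.slices = slices
        self.turn = 0                   # simple round-robin between slice directories

    def cycle(self):
        # Poll each directory arbiter, starting with the slice whose turn it is,
        # and grant at most one access to the shared port this cycle.
        for i in range(len(self.slices)):
            s = self.slices[(self.turn + i) % len(self.slices)]
            req = s.arbitrate()
            if req is not None:
                self.turn = (self.turn + i + 1) % len(self.slices)
                return (s.name, req)    # this request drives the access/command port
        return None


if __name__ == "__main__":
    slice0, slice1 = SliceDirectory("slice0"), SliceDirectory("slice1")
    # Even addresses map to slice0, odd to slice1 (an arbitrary hash for the sketch).
    for addr in (0x100, 0x101, 0x102, 0x103):
        (slice0 if addr % 2 == 0 else slice1).enqueue(addr)
    arbiter = CacheArbiter([slice0, slice1])
    for cycle in range(5):
        print(cycle, arbiter.cycle())
```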
    • 180. Invention grant
    • Title: System bus structure for large L2 cache array topology with different latency domains
    • Publication number: US07469318B2
    • Publication date: 2008-12-23
    • Application number: US11054925
    • Filing date: 2005-02-10
    • Inventors: Vicente Enrique Chung; Guy Lynn Guthrie; William John Starke; Jeffrey Adam Stuecheli
    • IPC: G06F12/00
    • CPC: G06F12/0811; G06F12/0831; G06F12/0851; Y02D10/13
    • Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses. The first data bus can be one of a plurality of data busses in a first data bus set, and the second data bus can be one of a plurality of data busses in a second data bus set. Two address busses (one for each data bus set) are used to receive successive address tags that identify which portions of the requested memory values are being received from each data bus set. For example, the requested memory values may be 32 bytes each, and the separate portions of the requested memory values are received over four successive cycles with an 8-byte portion of each value received each cycle. The cache lines are spread across different cache sectors of the cache memory, wherein the cache sectors have different output latencies, and the separate portions of a given requested memory value are loaded sequentially into the corresponding cache sectors based on their respective output latencies. Merge flow circuits responsive to the cache controller are used to receive the portions of a requested memory value and input those bytes into the cache sector. (A conceptual sketch of the interleaved bus loading follows this record.)
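The abstract above gives a concrete example: two 32-byte memory values arrive as 8-byte portions on two data busses over four overlapping cycles, with address tags identifying which value each portion belongs to. The following is a minimal Python sketch of that interleaving, under the simplifying assumption that portions arrive in order and the merge step is a plain byte append; the names and structure are illustrative and do not model the patented merge-flow circuits or latency domains.

```python
# Conceptual sketch (not the patented circuit): two 32-byte memory values arrive
# as 8-byte beats on two data busses over four overlapping cycles; each beat is
# carried with an address tag naming the cache line it belongs to.

CACHE_LINE_BYTES = 32
BEAT_BYTES = 8

def make_beats(tag, payload):
    """Split a 32-byte value into 8-byte beats, each paired with its address tag."""
    return [(tag, payload[i:i + BEAT_BYTES])
            for i in range(0, CACHE_LINE_BYTES, BEAT_BYTES)]

line_a = bytes(range(32))               # first requested memory value
line_b = bytes(range(100, 132))         # second requested memory value

bus0 = make_beats("A", line_a)          # first data bus carries line A's beats
bus1 = make_beats("B", line_b)          # second data bus carries line B's beats

cache_lines = {"A": bytearray(), "B": bytearray()}

# The two busses deliver their beats over overlapping cycles; the merge step
# appends each received portion to the cache line named by its address tag.
for cycle in range(4):
    for bus in (bus0, bus1):
        tag, portion = bus[cycle]
        cache_lines[tag] += portion

assert bytes(cache_lines["A"]) == line_a and bytes(cache_lines["B"]) == line_b
print("both 32-byte lines reconstructed from interleaved 8-byte beats")
```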