    • 64. Granted invention patent
    • Title: L2 cache controller with slice directory and unified cache structure
    • Publication No.: US07490200B2
    • Publication Date: 2009-02-10
    • Application No.: US11054924
    • Filing Date: 2005-02-10
    • Inventors: Leo James Clark; James Stephen Fields, Jr.; Guy Lynn Guthrie; William John Starke
    • IPC: G06F12/08
    • CPC: G06F12/0851; G06F12/0811
    • Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first cache directory to access the first cache array slice while using a second cache directory to access the second cache array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In the illustrative embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. An address tag associated with a load request is transmitted from the processor core with a designated bit that associates the address tag with only one of the cache array slices whose corresponding directory determines whether the address tag matches a currently valid cache entry. The cache array may be arranged with rows and columns of cache sectors wherein a given cache line is spread across sectors in different rows and columns, with at least one portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency. The cache array outputs different sectors of the given cache line in successive clock cycles based on the latency of a given sector. (A behavioral sketch of the slice-selection scheme follows this record.)
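The slice-selection mechanism described in this abstract can be pictured with a minimal behavioral model. The Python sketch below is illustrative only; the names (SlicedCache, CacheSlice, SLICE_SELECT_BIT) and the choice of designated bit are assumptions, not code or parameters from the patent.

```python
# Hypothetical sketch of the sliced-cache lookup described above.
# Names and the selected bit position are illustrative assumptions.

SLICE_SELECT_BIT = 6  # designated bit of the address tag that picks a slice


class CacheSlice:
    """One array slice with its own directory (address tag -> cached line)."""

    def __init__(self):
        self.directory = {}

    def lookup(self, tag):
        return self.directory.get(tag)   # None models a directory miss


class SlicedCache:
    """Two slices behind a single access/command port (one cache arbiter)."""

    def __init__(self):
        self.slices = [CacheSlice(), CacheSlice()]

    def load(self, address_tag):
        # The designated bit associates the tag with exactly one slice,
        # so only that slice's directory is searched for a valid entry.
        slice_id = (address_tag >> SLICE_SELECT_BIT) & 1
        line = self.slices[slice_id].lookup(address_tag)
        if line is None:
            return ("miss", slice_id)
        return ("hit", slice_id, line)


cache = SlicedCache()
cache.slices[1].directory[0x40] = b"cached line data"
print(cache.load(0x40))   # bit 6 of 0x40 is 1 -> slice 1 -> hit
print(cache.load(0x00))   # bit 6 is 0 -> slice 0 -> miss
```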
    • 65. Granted invention patent
    • Title: Method and apparatus for allocating data usages within an embedded dynamic random access memory device
    • Publication No.: US06678814B2
    • Publication Date: 2004-01-13
    • Application No.: US09895225
    • Filing Date: 2001-06-29
    • Inventors: Ravi Kumar Arimilli; James Stephen Fields, Jr.; Sanjeev Ghai; Praveen S. Reddy; William John Starke
    • IPC: G06F12/02
    • CPC: G06F12/0223; G06F9/5016
    • Abstract: An apparatus for allocating data usage in an embedded dynamic random access memory (DRAM) device is disclosed. The apparatus for allocating data usages within an embedded dynamic random access memory (DRAM) device comprises a control analysis circuit, a data/command flow circuit, and a partition management control. The control analysis circuit generates an allocation signal in response to processing performances of a processor. Coupled to an embedded DRAM device, the data/command flow circuit controls data flow from the processor to the embedded DRAM device. The partition management control, coupled to the control analysis circuit, partitions the embedded DRAM device into a first partition and a second partition. The data stored in the first partition are different from the data stored in the second partition according to their respective usage. The allocation percentages of the first and second partitions are dynamically allocated by the allocation signal from the control analysis circuit. (A sketch of the dynamic partitioning follows this record.)
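A rough way to picture the dynamic partitioning is a manager that resizes two regions whenever the allocation signal changes. This hypothetical Python sketch uses invented names (PartitionManager, apply_allocation) and treats the allocation signal as a simple percentage, which is an assumption rather than the patent's interface.

```python
# Hypothetical sketch of dynamic partition management for an embedded DRAM.
# In real hardware the allocation signal would come from a control-analysis
# circuit observing processor performance; here it is just a percentage.

EDRAM_SIZE = 1 << 20   # 1 MiB of embedded DRAM, illustrative only


class PartitionManager:
    def __init__(self, total_size):
        self.total_size = total_size
        self.apply_allocation(50)          # start with an even split

    def apply_allocation(self, first_pct):
        """Resize the two partitions from the allocation signal."""
        first_pct = max(0, min(100, first_pct))
        self.first_size = self.total_size * first_pct // 100
        self.second_size = self.total_size - self.first_size

    def region_for(self, address):
        """Route an address to the partition that currently owns it."""
        return "first" if address < self.first_size else "second"


mgr = PartitionManager(EDRAM_SIZE)
print(mgr.region_for(0xA0000))   # "second" under the initial 50/50 split
mgr.apply_allocation(75)         # allocation signal favors the first usage
print(mgr.region_for(0xA0000))   # the same address now lands in "first"
```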
    • 66. Granted invention patent
    • Title: Sequencing data on a shared data bus via a memory buffer to prevent data overlap during multiple memory read operations
    • Publication No.: US06622222B2
    • Publication Date: 2003-09-16
    • Application No.: US09843071
    • Filing Date: 2001-04-26
    • Inventors: Ravi Kumar Arimilli; James Stephen Fields, Jr.; Warren Edward Maule
    • IPC: G06F13/00
    • CPC: G06F13/161
    • Abstract: Disclosed is a method and memory subsystem that allows for speculative issuance of reads to a DRAM array to provide efficient utilization of the data out bus and faster read response for accesses to a single DRAM array. Two read requests are issued simultaneously to a first and second DRAM in the memory subsystem, respectively. Data issued from the first DRAM is immediately placed on the data out bus, while data issued from the second DRAM is held in an associated buffer. The processor or memory controller then generates a release signal if the second read is not speculative or is correctly speculated. The release signal is sent to the second DRAM after the first issued data is placed on the bus. The release signal releases the data held in the buffer associated with the second DRAM from the buffer to the data out bus. Because the data has already been issued when the release signal is received, no loss of time is incurred in issuing the data from the DRAM and only a small clock cycle delay occurs between the first issued data and the second issued data on the data out bus. (A sketch of the buffered read sequencing follows this record.)
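The buffered, release-gated sequencing can be sketched as two reads issued in the same cycle, with the second result parked in a buffer until a release signal arrives. The Python model below is a hypothetical illustration; the class names and the integer cycle counter are assumptions, not the patent's design.

```python
# Hypothetical sketch of the release-gated read sequencing described above.
# Timing is modeled with integer "cycles"; names are illustrative only.

class DRAM:
    def __init__(self, name, data):
        self.name, self.data = name, data

    def read(self):
        return self.data


class MemorySubsystem:
    def __init__(self, dram_a, dram_b):
        self.dram_a, self.dram_b = dram_a, dram_b
        self.buffer = None            # holds the second DRAM's data
        self.bus = []                 # (cycle, data) beats on the data-out bus

    def issue_reads(self, cycle):
        # Both reads are issued simultaneously; the first result goes
        # straight to the data-out bus, the second waits in the buffer.
        self.bus.append((cycle, self.dram_a.read()))
        self.buffer = self.dram_b.read()

    def release(self, cycle):
        # The release signal moves the buffered data onto the bus; since the
        # data was already read, only a short bus gap separates the beats.
        if self.buffer is not None:
            self.bus.append((cycle, self.buffer))
            self.buffer = None


mem = MemorySubsystem(DRAM("A", b"first line"), DRAM("B", b"second line"))
mem.issue_reads(cycle=0)
mem.release(cycle=1)      # controller confirms the speculative second read
print(mem.bus)            # back-to-back data beats on the shared bus
```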
    • 68. Granted invention patent
    • Title: Method and apparatus for concurrently communicating with multiple embedded dynamic random access memory devices
    • Publication No.: US06574719B2
    • Publication Date: 2003-06-03
    • Application No.: US09903720
    • Filing Date: 2001-07-12
    • Inventors: Ravi Kumar Arimilli; James Stephen Fields, Jr.; Sanjeev Ghai; Praveen S. Reddy; William John Starke
    • IPC: G06F12/00
    • CPC: G06F13/28; G06F13/4243
    • Abstract: An apparatus for providing concurrent communications between multiple memory devices and a processor is disclosed. Each of the memory devices includes a driver, a phase/cycle adjust sensing circuit, and a bus alignment communication logic. Each phase/cycle adjust sensing circuit detects an occurrence of a cycle adjustment from a corresponding driver within a memory device. If an occurrence of a cycle adjustment has been detected, the bus alignment communication logic communicates the occurrence of the cycle adjustment to the processor. The bus alignment communication logic also communicates the occurrence of the cycle adjustment to the bus alignment communication logic in the other memory devices. There are multiple receivers within the processor, and each of the receivers is designed to receive data from a respective driver in a memory device. Each of the receivers includes a cycle delay block. The receiver that received the occurrence of a cycle adjustment informs the other receivers, which did not, to use their cycle delay blocks to delay the incoming data for at least one cycle. (A sketch of the alignment scheme follows this record.)
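The alignment scheme can be pictured as a broadcast: the receiver that observes a cycle adjustment tells every other receiver to add one cycle of delay so all data lanes stay aligned. The sketch below is a hypothetical Python model with invented names (Receiver, BusAlignment) and a fixed flight time chosen only for illustration.

```python
# Hypothetical sketch of the cycle-adjustment alignment described above.
# A receiver whose device reports a cycle adjustment tells the others to
# delay their incoming data by one cycle via their cycle-delay blocks.

class Receiver:
    def __init__(self, name):
        self.name = name
        self.extra_delay = 0          # cycles added by the cycle-delay block

    def apply_cycle_delay(self, cycles=1):
        self.extra_delay += cycles

    def arrival_cycle(self, launch_cycle, flight_cycles=3, device_slip=0):
        return launch_cycle + flight_cycles + device_slip + self.extra_delay


class BusAlignment:
    """Broadcasts a detected cycle adjustment to every other receiver."""

    def __init__(self, receivers):
        self.receivers = receivers

    def report_adjustment(self, adjusted):
        for rx in self.receivers:
            if rx is not adjusted:
                rx.apply_cycle_delay(1)


rx0, rx1, rx2 = Receiver("edram0"), Receiver("edram1"), Receiver("edram2")
align = BusAlignment([rx0, rx1, rx2])
align.report_adjustment(rx1)                       # edram1's driver slipped a cycle
print(rx1.arrival_cycle(0, device_slip=1))         # 4: late because of the slip
print([rx.arrival_cycle(0) for rx in (rx0, rx2)])  # [4, 4]: realigned by delay blocks
```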