    • 43. Patent Application
    • Store stream prefetching in a microprocessor
    • Publication No.: US20060179238A1
    • Publication Date: 2006-08-10
    • Application No.: US11054871
    • Filing Date: 2005-02-10
    • Inventors: John Griswell, Hung Le, Francis O'Connell, William Starke, Jeffrey Stuecheli, Albert Williams
    • IPC: G06F13/28
    • CPC: G06F12/0862; G06F9/30043; G06F9/30047; G06F9/383; G06F12/0811
    • Abstract: In a microprocessor having a load/store unit and prefetch hardware, the prefetch hardware includes a prefetch queue containing entries indicative of allocated data streams. A prefetch engine receives an address associated with a store instruction executed by the load/store unit. The prefetch engine determines whether to allocate an entry in the prefetch queue corresponding to the store instruction by comparing entries in the queue to a window of addresses encompassing multiple cache blocks, where the window of addresses is derived from the received address. The prefetch engine compares entries in the prefetch queue to a window of 2M contiguous cache blocks. The prefetch engine suppresses allocation of a new entry when any entry in the prefetch queue is within the address window. The prefetch engine further suppresses allocation of a new entry when the data address of the store instruction is equal to an address in a border area of the address window.
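The allocation filter this abstract describes reduces to a short decision rule, sketched below in Python. This is a minimal sketch, not the patented logic: the block size CACHE_BLOCK, the value of M, the alignment of the address window, and the treatment of the border area as the outermost block on each side are all assumptions made for illustration.

```python
# Minimal sketch of the store-stream allocation filter from the abstract.
# CACHE_BLOCK, M, the window alignment, and the border definition are
# illustrative assumptions, not details taken from the patent.

CACHE_BLOCK = 128   # assumed cache-block size in bytes
M = 4               # window spans 2*M contiguous cache blocks

def block_of(addr: int) -> int:
    """Index of the cache block containing a byte address."""
    return addr // CACHE_BLOCK

def window_of(addr: int) -> tuple[int, int]:
    """Aligned window of 2*M contiguous blocks derived from the address."""
    first = (block_of(addr) // (2 * M)) * (2 * M)
    return first, first + 2 * M - 1

def should_allocate(store_addr: int, queue_blocks: list[int]) -> bool:
    """True if a new prefetch-queue entry may be allocated for this store."""
    lo, hi = window_of(store_addr)
    # Suppress allocation when any existing entry falls inside the window.
    if any(lo <= qb <= hi for qb in queue_blocks):
        return False
    # Suppress allocation when the store lands in the window's border area
    # (assumed here to be the outermost block on each side).
    if block_of(store_addr) in (lo, hi):
        return False
    return True

# Example: the queue already tracks a stream at block 3, so a store into
# the same aligned window (blocks 0-7) is filtered out, while a store to
# the interior of an unclaimed window (blocks 8-15) allocates an entry.
assert should_allocate(9 * CACHE_BLOCK, [3]) is True
assert should_allocate(4 * CACHE_BLOCK, [3]) is False
```

Under these assumptions, a store that hits a window already claimed by an allocated stream, or that lands on the window's edge, does not start a new stream; only stores to the interior of an unclaimed window allocate an entry.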
    • 44. Patent Application
    • System bus structure for large L2 cache array topology with different latency domains
    • Publication No.: US20060179222A1
    • Publication Date: 2006-08-10
    • Application No.: US11054925
    • Filing Date: 2005-02-10
    • Inventors: Vicente Chung, Guy Guthrie, William Starke, Jeffrey Stuecheli
    • IPC: G06F12/00
    • CPC: G06F12/0811; G06F12/0831; G06F12/0851; Y02D10/13
    • Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses. The first data bus can be one of a plurality of data busses in a first data bus set, and the second data bus can be one of a plurality of data busses in a second data bus set. Two address busses (one for each data bus set) are used to receive successive address tags that identify which portions of the requested memory values are being received from each data bus set. For example, the requested memory values may be 32 bytes each, and the separate portions of the requested memory values are received over four successive cycles with an 8-byte portion of each value received each cycle. The cache lines are spread across different cache sectors of the cache memory, wherein the cache sectors have different output latencies, and the separate portions of a given requested memory value are loaded sequentially into the corresponding cache sectors based on their respective output latencies. Merge flow circuits responsive to the cache controller are used to receive the portions of a requested memory value and input those bytes into the cache sector.
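The overlapped two-bus fill can be modeled at a toy level. In the Python sketch below, the 32-byte values, 8-byte portions, and four-cycle spans follow the example given in the abstract, while the alternating bus assignment, the tag handling, and the data structures are illustrative assumptions rather than the patented design.

```python
# Toy cycle-level model of filling two cache lines from two data bus sets
# with interleaved 8-byte beats, per the abstract's 32-byte/4-cycle example.
# The alternation pattern and tag handling are illustrative assumptions.

BEAT = 8    # bytes delivered per bus set per cycle (abstract's example)
BEATS = 4   # 4 beats x 8 bytes = one 32-byte memory value

def interleaved_fill(value_a: bytes, value_b: bytes) -> tuple[bytes, bytes]:
    """Fill two cache lines over overlapping time spans.

    Each cycle both bus sets deliver one beat, accompanied by an address
    tag naming the value it belongs to; the beats of the two values are
    interleaved between the bus sets rather than sent back to back.
    """
    lines = {"A": bytearray(BEAT * BEATS), "B": bytearray(BEAT * BEATS)}
    values = {"A": value_a, "B": value_b}
    for cycle in range(BEATS):
        lo = cycle * BEAT
        # Assumed interleave: swap which bus set carries which value each cycle.
        bus_set_tags = ("A", "B") if cycle % 2 == 0 else ("B", "A")
        for tag in bus_set_tags:    # one address tag per bus set per cycle
            lines[tag][lo:lo + BEAT] = values[tag][lo:lo + BEAT]
    return bytes(lines["A"]), bytes(lines["B"])

line_a, line_b = interleaved_fill(bytes(range(32)), bytes(range(32, 64)))
assert line_a == bytes(range(32)) and line_b == bytes(range(32, 64))
```

The point of the overlap is throughput: both 32-byte fills complete in the same four cycles instead of running back to back over eight.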
    • 46. Patent Application
    • Pseudo random test pattern generation using Markov chains
    • Publication No.: US20050108605A1
    • Publication Date: 2005-05-19
    • Application No.: US09737347
    • Filing Date: 2000-12-15
    • Inventors: Jeffrey Stuecheli
    • IPC: G01R31/28; G01R31/3183; G06F17/50
    • CPC: G01R31/318357; G01R31/318307; G01R31/318385; G06F17/5022
    • Abstract: A driver module is provided that generates test patterns with desired tendencies. The driver module provides these test patterns to controlling code for simulation of a hardware model. The test patterns are generated by creating and connecting subgraphs in a Markov chain. The Markov model describes a plurality of states, each having a probability of going to at least one other state. Markov models may be created to determine whether to drive an interface in the hardware model and to determine the command to drive through the interface. Once the driver module creates and connects the subgraphs of the Markov models, the driver module initiates a random walk through the Markov chains and provides the commands to the controlling code.
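The generation mechanism lends itself to a compact sketch: build a weighted state graph, then walk it randomly and emit one command per visited state. In the Python fragment below the states, transition probabilities, and command names are invented for illustration; only the walk mechanism follows the abstract.

```python
# Minimal sketch of pseudo-random test generation via a Markov-chain walk.
# The states, weights, and command names are invented for illustration.
import random

# Each state lists (next_state, probability) transitions; the probabilities
# out of each state sum to 1, biasing the walk toward desired tendencies.
CHAIN = {
    "idle":  [("read", 0.5), ("write", 0.3), ("idle", 0.2)],
    "read":  [("read", 0.4), ("write", 0.1), ("idle", 0.5)],
    "write": [("write", 0.3), ("idle", 0.7)],
}

def random_walk(start: str, steps: int, rng: random.Random):
    """Walk the chain, yielding one command (the state name) per step."""
    state = start
    for _ in range(steps):
        nexts = [s for s, _ in CHAIN[state]]
        weights = [p for _, p in CHAIN[state]]
        state = rng.choices(nexts, weights=weights)[0]
        yield state   # the visited state names the command to drive

if __name__ == "__main__":
    # Deterministic seed, so the "random" pattern is reproducible.
    print(list(random_walk("idle", 12, random.Random(0))))
```

Weighting the transitions is what gives the patterns their "desired tendencies": raising the self-loop weight on "write", for example, makes bursts of back-to-back writes more likely without hand-coding any particular sequence.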