    • 61. Invention application
    • Retry Mechanism
    • US20090177846A1
    • 2009-07-09
    • US12408410
    • 2009-03-20
    • James B. Keller, Sridhar P. Subramanian, Ramesh Gunna
    • G06F12/00, G06F3/00, G06F12/08, G06F13/00
    • G06F12/0831, G06F13/362, G06F13/4213, Y02D10/13, Y02D10/14, Y02D10/151
    • An interface unit may comprise a buffer configured to store requests that are to be transmitted on an interconnect and a control unit coupled to the buffer. In one embodiment, the control unit is coupled to receive a retry response from the interconnect during a response phase of a first transaction for a first request stored in the buffer. The control unit is configured to record an identifier supplied on the interconnect with the retry response that identifies a second transaction that is in progress on the interconnect. The control unit is configured to inhibit reinitiation of the first transaction at least until detecting a second transmission of the identifier. In another embodiment, the control unit is configured to assert a retry response during a response phase of a first transaction responsive to a snoop hit of the first transaction on a first request stored in the buffer for which a second transaction is in progress on the interconnect. The control unit is further configured to provide an identifier of the second transaction with the retry response.
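The abstract above describes a retry-tracking scheme: a retried request records the identifier of the in-flight transaction that blocked it and stays inhibited until that identifier is observed on the interconnect again. The following is a minimal Python sketch of that bookkeeping, written for illustration only; the class and method names are invented and are not taken from the patent.

```python
# A minimal, purely illustrative Python model of the retry-tracking behavior
# described in the abstract above. Names are hypothetical, not the patent's.

class InterfaceUnitModel:
    def __init__(self):
        # request id -> id of the in-progress transaction that caused the retry
        self.inhibited = {}

    def on_retry_response(self, request_id, blocking_txn_id):
        """Record the identifier supplied with the retry response and
        inhibit reinitiation of the retried request."""
        self.inhibited[request_id] = blocking_txn_id

    def on_interconnect_id(self, txn_id):
        """Called whenever a transaction identifier is observed on the
        interconnect; a second transmission of a recorded identifier
        releases the requests that were waiting on it."""
        released = [req for req, blk in self.inhibited.items() if blk == txn_id]
        for req in released:
            del self.inhibited[req]
        return released

    def may_reinitiate(self, request_id):
        return request_id not in self.inhibited


if __name__ == "__main__":
    unit = InterfaceUnitModel()
    unit.on_retry_response(request_id=1, blocking_txn_id=0x2A)
    assert not unit.may_reinitiate(1)            # held off while 0x2A is in flight
    assert unit.on_interconnect_id(0x2A) == [1]  # second transmission of the id
    assert unit.may_reinitiate(1)                # now free to retry
```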
    • 63. Invention application
    • Memory Controller with Loopback Test Interface
    • US20080307276A1
    • 2008-12-11
    • US11760566
    • 2007-06-08
    • Luka Bodrozic, Sukalpa Biswas, Hao Chen, Sridhar P. Subramanian, James B. Keller
    • G11C29/04, G06F12/00
    • G01R31/31716
    • In one embodiment, an apparatus comprises an interconnect; at least one processor coupled to the interconnect; and at least one memory controller coupled to the interconnect. The memory controller is programmable by the processor into a loopback test mode of operation and, in the loopback test mode, the memory controller is configured to receive a first write operation from the processor over the interconnect. The memory controller is configured to route write data from the first write operation through a plurality of drivers and receivers connected to a plurality of data pins that are capable of connection to one or more memory modules. The memory controller is further configured to return the write data as read data on the interconnect for a first read operation received from the processor on the interconnect.
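As a rough illustration of the loopback test mode summarized above, the Python sketch below models a controller that, once programmed into loopback mode, captures write data at its data pins and returns it as read data. The pin modeling and all names are assumptions made for this example, not the patent's implementation.

```python
# Illustrative Python sketch of the loopback-test behavior summarized above.

class LoopbackMemoryController:
    def __init__(self, num_pins=64):
        self.loopback_mode = False
        self.num_pins = num_pins
        self.captured = None  # data captured at the receivers in loopback mode

    def program_loopback(self, enable: bool):
        """Processor programs the controller into (or out of) loopback test mode."""
        self.loopback_mode = enable

    def write(self, data: int):
        if self.loopback_mode:
            # Drive the write data onto the data pins and capture it back
            # through the receivers instead of sending it to a memory module.
            self.captured = data & ((1 << self.num_pins) - 1)
        else:
            raise NotImplementedError("normal DRAM path not modeled here")

    def read(self) -> int:
        if self.loopback_mode:
            # Return the looped-back write data as read data on the interconnect.
            return self.captured
        raise NotImplementedError("normal DRAM path not modeled here")


if __name__ == "__main__":
    mc = LoopbackMemoryController()
    mc.program_loopback(True)
    mc.write(0xDEADBEEF)
    assert mc.read() == 0xDEADBEEF  # drivers/receivers verified if data matches
```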
    • 64. Granted invention patent
    • Flexible probe/probe response routing for maintaining coherency
    • US07296122B2
    • 2007-11-13
    • US10628715
    • 2003-07-28
    • James B. Keller, Dale E. Gulick
    • G06F12/14, G06F12/16
    • G06F12/0815, G06F12/0817
    • A computer system may include multiple processing nodes, one or more of which may be coupled to separate memories which may form a distributed memory system. The processing nodes may include caches, and the computer system may maintain coherency between the caches and the distributed memory system. Particularly, the computer system may implement a flexible probe command/response routing scheme. The scheme may employ an indication within the probe command which identifies a receiving node to receive the probe responses. For example, probe commands indicating that the target or the source of transaction should receive probe responses corresponding to the transaction may be included. Probe commands may specify the source of the transaction as the receiving node for read transactions (such that dirty data is delivered to the source node from the node storing the dirty data). On the other hand, for write transactions (in which data is being updated in memory at the target node of the transaction), the probe commands may specify the target of the transaction as the receiving node. In this manner, the target may determine when to commit the write data to memory and may receive any dirty data to be merged with the write data.
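The routing rule in the abstract above is simple to state in code: the probe command carries an indication of which node collects the probe responses, the source node for reads and the target node for writes. Below is a hedged Python sketch of just that selection; the field names are illustrative and not the patent's terminology.

```python
# Sketch of the receiving-node selection described above: probe responses go to
# the source for reads (dirty data reaches the requester) and to the target for
# writes (the memory owner merges dirty data and commits the write).

from dataclasses import dataclass

@dataclass
class ProbeCommand:
    transaction_id: int
    source_node: int        # node that issued the transaction
    target_node: int        # node owning the addressed memory
    response_receiver: int  # node that should collect the probe responses

def make_probe(transaction_id, source_node, target_node, is_write):
    receiver = target_node if is_write else source_node
    return ProbeCommand(transaction_id, source_node, target_node, receiver)

if __name__ == "__main__":
    read_probe = make_probe(1, source_node=0, target_node=3, is_write=False)
    write_probe = make_probe(2, source_node=0, target_node=3, is_write=True)
    assert read_probe.response_receiver == 0   # dirty data goes to the requester
    assert write_probe.response_receiver == 3  # target merges data, commits write
```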
    • 65. Granted invention patent
    • Method and circuit for initializing a de-skewing buffer in a clock forwarded system
    • US06952791B2
    • 2005-10-04
    • US10044549
    • 2002-01-11
    • James B. Keller, Daniel W. Dobberpuhl
    • G06F5/10, H04L7/00, H04L7/10, G06F1/04
    • G06F5/10, G06F2205/104, H04L7/00, H04L7/0008, H04L7/005, H04L7/10
    • A method and circuit for initializing a buffer in a clock forwarded system. A buffer is configured for temporarily storing incoming data received on the clock-forwarded interface. The buffer may use a write pointer and a read pointer which may be clocked by two different clocks allowing independent write and read accesses to the buffer. In an initialization mode, a predetermined pattern of data may be written into an entry in the buffer. In one embodiment, a logic circuit may detect the predetermined pattern of data and may cause the value of the write pointer to be captured. A synchronizing circuit may synchronize an indication that the predetermined pattern of data has been detected to the clock used by the read pointer. The synchronizer circuit may then provide a initialize signal to the read pointer which stores the captured write pointer value into the read pointer. This captured write pointer value becomes the initial value of the read pointer, effectively offsetting the read pointer from the write pointer.
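To make the initialization step above concrete, here is a small Python model in which detecting a predetermined training pattern on the write side captures the write-pointer value and uses it as the read pointer's initial value. The cross-clock-domain synchronization is not modeled, and the pattern value and all names are assumptions made for this illustration.

```python
# Simplified software model of the de-skew buffer initialization described above.

TRAINING_PATTERN = 0xA5  # hypothetical predetermined pattern

class DeskewBuffer:
    def __init__(self, depth=8):
        self.depth = depth
        self.entries = [None] * depth
        self.write_ptr = 0
        self.read_ptr = None   # uninitialized until the pattern is detected

    def write(self, data):
        """Write side, clocked by the forwarded clock."""
        self.entries[self.write_ptr] = data
        if self.read_ptr is None and data == TRAINING_PATTERN:
            # Pattern detected: capture the write pointer and use it to
            # initialize the read pointer, establishing the de-skew offset.
            self.read_ptr = self.write_ptr
        self.write_ptr = (self.write_ptr + 1) % self.depth

    def read(self):
        """Read side, clocked by the receiving domain's clock."""
        data = self.entries[self.read_ptr]
        self.read_ptr = (self.read_ptr + 1) % self.depth
        return data

if __name__ == "__main__":
    buf = DeskewBuffer()
    for value in (0x00, 0x00, TRAINING_PATTERN, 0x11, 0x22):
        buf.write(value)
    assert buf.read() == TRAINING_PATTERN  # reads start at the captured offset
    assert buf.read() == 0x11
```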
    • 66. Granted invention patent
    • Virtual channels and corresponding buffer allocations for deadlock-free computer system operation
    • US06938094B1
    • 2005-08-30
    • US09399281
    • 1999-09-17
    • James B. Keller, Derrick R. Meyer
    • G06F12/08, G06F13/38, G06F15/173, G06F15/177, H04L12/28
    • G06F15/17381
    • A computer system employs virtual channels and allocates different resources to the virtual channels. Packets which do not have logical/protocol-related conflicts are grouped into a virtual channel. Accordingly, logical conflicts occur between packets in separate virtual channels. The packets within a virtual channel may share resources (and hence experience resource conflicts), but the packets within different virtual channels may not share resources. Since packets which may experience resource conflicts do not experience logical conflicts, and since packets which may experience logical conflicts do not experience resource conflicts, deadlock-free operation may be achieved. Additionally, each virtual channel may be assigned control packet buffers and data packet buffers. Control packets may be substantially smaller in size, and may occur more frequently than data packets. By providing separate buffers, buffer space may be used efficiently. If a control packet which does not specify a data packet is received, no data packet buffer space is allocated. If a control packet which does specify a data packet is received, both control packet buffer space and data packet buffer space is allocated.
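The buffer-accounting rule in the abstract above (separate control and data buffers per virtual channel, with a data buffer consumed only when the control packet specifies a data packet) can be sketched as follows. This is an illustrative Python model under assumed buffer counts and invented names, not the patent's implementation.

```python
# Per-virtual-channel buffer accounting as described in the abstract above.

class VirtualChannel:
    def __init__(self, control_buffers, data_buffers):
        self.free_control = control_buffers
        self.free_data = data_buffers

    def can_accept(self, has_data):
        return self.free_control > 0 and (not has_data or self.free_data > 0)

    def allocate(self, has_data):
        if not self.can_accept(has_data):
            raise RuntimeError("no buffer available on this virtual channel")
        self.free_control -= 1       # every packet needs a control buffer
        if has_data:
            self.free_data -= 1      # data buffer only if a data packet follows

    def release(self, has_data):
        self.free_control += 1
        if has_data:
            self.free_data += 1

if __name__ == "__main__":
    # Separate channels never compete for buffers, which is the property that
    # breaks the combined resource/logical dependency cycles causing deadlock.
    channels = {"request": VirtualChannel(4, 2), "response": VirtualChannel(4, 2)}
    channels["request"].allocate(has_data=False)   # control-only packet
    channels["response"].allocate(has_data=True)   # read response carrying data
    assert channels["request"].free_data == 2      # no data buffer was consumed
```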
    • 68. Granted invention patent
    • Conserving system memory bandwidth during a memory read operation in a multiprocessing computer system
    • US06728841B2
    • 2004-04-27
    • US10002753
    • 2001-10-31
    • James B. Keller
    • G06F13/00
    • G06F12/0813
    • A messaging scheme that conserves system memory bandwidth during a memory read operation in a multiprocessing computer system is described. A source processing node sends a memory read command to a target processing node to read data from a designated memory location in a system memory associated with the target processing node. The target node transmits a read response to the source node containing the requested data and also concurrently transmits a probe command to one or more of the remaining nodes in the multiprocessing computer system. In response to the probe command each remaining processing node checks whether the processing node has a cached copy of the requested data. If a processing node, other than the source and the target nodes, finds a modified cached copy of the designated memory location, that processing node responds with a memory cancel response sent to the target node and a read response sent to the source node. The read response contains the modified cache block containing the requested data, and the memory cancel response causes the target node to abort further processing of the memory read command, and to stop transmission of the read response, if the target node hasn't transmitted the read response yet. The memory cancel message thus attempts to avoid relatively lengthy and time-consuming system memory accesses when the system memory has a stale data.
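The message flow described above can be paraphrased in a short simulation: the target probes the remaining nodes while the system memory access proceeds; a node holding a modified copy sends the data directly to the source and a memory-cancel to the target, which then withholds its own memory response. The Python sketch below is a loose, invented model of that decision only.

```python
# A loose Python paraphrase of the message flow in the abstract above. Node and
# message names are invented; only the decision logic is modeled.

class CachingNode:
    def __init__(self, node_id, has_modified_copy=False):
        self.node_id = node_id
        self.has_modified_copy = has_modified_copy

    def probe(self):
        # A node with a modified (dirty) copy answers the probe by sending the
        # block to the source and a MemCancel to the target; modeled as a flag.
        return self.has_modified_copy


def handle_memory_read(other_nodes):
    """Target-node view: probe the remaining nodes while the (slow) system
    memory access proceeds; withhold the memory response if a cancel arrives."""
    dirty_owner = next((n.node_id for n in other_nodes if n.probe()), None)
    if dirty_owner is not None:
        return {"memory_response_sent": False, "data_from_node": dirty_owner}
    return {"memory_response_sent": True, "data_from_node": None}


if __name__ == "__main__":
    nodes = [CachingNode(1), CachingNode(2, has_modified_copy=True), CachingNode(3)]
    result = handle_memory_read(nodes)
    assert result == {"memory_response_sent": False, "data_from_node": 2}
```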
    • 70. Granted invention patent
    • Training line predictor for branch targets
    • US06647490B2
    • 2003-11-11
    • US09419832
    • 1999-10-14
    • James B. Keller, Puneet Sharma, Keith R. Schakel, Francis M. Matus
    • G06F9/38
    • G06F9/30149, G06F9/30054, G06F9/3806, G06F9/3808, G06F9/3816, G06F9/382
    • A line predictor caches alignment information for instructions. In response to each fetch address, the line predictor provides alignment information for the instruction beginning at the fetch address, as well as one or more additional instructions subsequent to that instruction. The line predictor may include a memory having multiple entries, each entry storing up to a predefined maximum number of instruction pointers and a fetch address corresponding to the instruction identified by a first one of the instruction pointers. Additionally, each entry may include a link to another entry storing instruction pointers to the next instructions within the predicted instruction stream, and a next fetch address corresponding to the first instruction within the next entry. The next fetch address may be provided to the instruction cache to fetch the corresponding instruction bytes. If the terminating instruction within the entry is a branch instruction, the line predictor is trained with respect to the next fetch address (and next index within the line predictor, which provides the link to the next entry). As line predictor entries are created, a set of branch predictors may be accessed to provide an initial next fetch address and index. The initial training is verified by accessing the branch predictors at each fetch of the line predictor entry, and updated as dictated by the state of the branch predictors at each fetch.
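As a closing illustration, the entry layout and training update described in the last abstract can be sketched as a small data structure: each line-predictor entry caches instruction pointers, a fetch address, a link to the next entry, and a next fetch address that is corrected when the branch predictors disagree. The Python below is a hypothetical sketch; field names are not the patent's.

```python
# Sketch of a line-predictor entry and its training update, per the abstract above.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LineEntry:
    fetch_address: int
    instruction_pointers: List[int]           # up to a fixed maximum per line
    next_fetch_address: Optional[int] = None  # start of the predicted next line
    next_index: Optional[int] = None          # link to the next line-predictor entry

class LinePredictor:
    def __init__(self):
        self.entries: List[LineEntry] = []

    def lookup(self, index):
        return self.entries[index]

    def train(self, index, predicted_next_addr, predicted_next_index):
        """Update an entry when the branch predictors indicate a different
        next fetch address than the one currently cached."""
        entry = self.entries[index]
        if entry.next_fetch_address != predicted_next_addr:
            entry.next_fetch_address = predicted_next_addr
            entry.next_index = predicted_next_index

if __name__ == "__main__":
    lp = LinePredictor()
    lp.entries.append(LineEntry(0x1000, [0x1000, 0x1004, 0x1008], 0x100C, 1))
    lp.train(0, predicted_next_addr=0x2000, predicted_next_index=5)  # taken branch
    assert lp.lookup(0).next_fetch_address == 0x2000
```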