    • 63. Invention Grant
    • Data processing system having demand based write through cache with enforced ordering
    • Publication No.: US5796979A
    • Publication date: 1998-08-18
    • Application No.: US730994
    • Filing date: 1996-10-16
    • Inventors: Ravi Kumar Arimilli, John Steven Dodson, Guy Lynn Guthrie, Jerry Don Lewis
    • IPC: G06F12/08, G06F13/12
    • CPC: G06F12/0866, G06F12/0815, G06F12/0835, G06F2212/303
    • A data processing system includes a processor, a system memory, one or more input/output channel controllers (IOCCs), and a system bus connecting the processor, the memory and the IOCCs together for communicating instructions, addresses and data between the various elements of the system. The IOCC includes a paged cache storage having a number of lines, wherein each line of the page may be, for example, 32 bytes. Each page in the cache also has several attribute bits, including the so-called WIM attribute bits: the W bit controls write-through operations; the I bit controls cache inhibit; and the M bit controls memory coherency. Since the IOCC is unaware of these page table attribute bits for the cache lines being DMAed to system memory, the IOCC must maintain memory consistency and cache coherency without sacrificing performance. For DMA write data to system memory, new cache attributes called global, cacheable and demand based write through are created. Individual writes within a cache line are gathered by the IOCC and only written to system memory when the I/O bus master accesses a different cache line or relinquishes the I/O bus.
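The write-gathering behavior in the last sentence of the abstract can be illustrated with a minimal sketch. This is not IBM's implementation; the class name, dictionary-based "memory", and flush counter are all invented here for illustration. It models only the demand condition: gathered writes to one 32-byte line are written through only when the I/O bus master touches a different line or releases the bus.

```python
LINE_SIZE = 32  # bytes per cache line, as in the abstract's example

class IOCCWriteGather:
    """Hypothetical model of the IOCC's demand-based write gathering."""
    def __init__(self):
        self.memory = {}          # simulated system memory: addr -> value
        self.gather_line = None   # base address of the line being gathered
        self.gather_data = {}     # pending writes within that line
        self.flushes = 0          # count of write-through operations

    def dma_write(self, addr, value):
        line = addr - (addr % LINE_SIZE)
        if self.gather_line is not None and line != self.gather_line:
            self._flush()         # demand: a different cache line was accessed
        self.gather_line = line
        self.gather_data[addr] = value

    def relinquish_bus(self):
        if self.gather_line is not None:
            self._flush()         # demand: bus master released the I/O bus

    def _flush(self):
        # Write the gathered line through to system memory in one operation.
        self.memory.update(self.gather_data)
        self.gather_data = {}
        self.gather_line = None
        self.flushes += 1
```

Three writes within one line cost a single write-through; crossing into a second line or relinquishing the bus forces the pending line out.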
    • 64. Invention Grant
    • Data processing system and method for resolving a conflict between requests to modify a shared cache line
    • Publication No.: US06763434B2
    • Publication date: 2004-07-13
    • Application No.: US09752947
    • Filing date: 2000-12-30
    • Inventors: Ravi Kumar Arimilli, John Steven Dodson, Guy Lynn Guthrie, Derek Edward Williams
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • Disclosed herein are a data processing system and method of operating a data processing system that arbitrate between conflicting requests to modify data cached in a shared state and that protect ownership of the cache line granted during such arbitration until modification of the data is complete. The data processing system includes a plurality of agents coupled to an interconnect that supports pipelined transactions. While data associated with a target address are cached at a first agent among the plurality of agents in a shared state, the first agent issues a transaction on the interconnect. In response to snooping the transaction, a second agent provides a snoop response indicating that the second agent has a pending conflicting request and a coherency decision point provides a snoop response granting the first agent ownership of the data. In response to the snoop responses, the first agent is provided with a combined response representing a collective response to the transaction of all of the agents that grants the first agent ownership of the data. In response to the combined response, the first agent is permitted to modify the data.
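The arbitration flow described above can be sketched as follows. This is a loose toy model, not the patented logic: the class names, string-valued responses, and the way the combined response is formed are assumptions made here purely to show how a coherency decision point can grant one of two conflicting requesters ownership of a Shared line and protect it until the modification completes.

```python
class CoherencyPoint:
    """Hypothetical coherency decision point: grants and protects ownership."""
    def __init__(self):
        self.owner = None

    def snoop(self, requester):
        if self.owner is None:
            self.owner = requester      # grant: ownership is now protected
            return "grant"
        return "retry"                  # line owned elsewhere; retry later

    def complete(self, requester):
        assert self.owner is requester  # only the owner finishes its store
        self.owner = None               # release after modification is done

class Agent:
    def __init__(self, name, point):
        self.name, self.point = name, point
        self.pending = False            # this agent wants the line too

    def snoop_response(self):
        # A peer with its own pending request reports the conflict.
        return "retry" if self.pending else "null"

    def try_modify(self, peers):
        self.pending = True
        peer_responses = [p.snoop_response() for p in peers]
        decision = self.point.snoop(self)
        # Combined response: the decision point's grant overrides peer
        # "retry" responses, so exactly one conflicting agent wins.
        combined = "go" if decision == "grant" else "retry"
        if combined == "go":
            self.point.complete(self)   # modify the line, then release it
            self.pending = False
        return combined
```

Even while a peer reports a pending conflicting request, the combined response lets the granted agent modify the data; the loser simply re-arbitrates afterwards.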
    • 67. Invention Grant
    • Multiprocessor speculation mechanism for efficiently managing multiple barrier operations
    • Publication No.: US06625660B1
    • Publication date: 2003-09-23
    • Application No.: US09588605
    • Filing date: 2000-06-06
    • Inventors: Guy Lynn Guthrie, Ravi Kumar Arimilli, John Steven Dodson, Derek Edward Williams
    • IPC: G06F15/16
    • CPC: G06F9/30087, G06F9/383, G06F9/3834, G06F9/3842, G06F9/3867
    • Disclosed is a method of operation within a processor that permits load instructions to be issued speculatively. An instruction sequence is received that includes multiple barrier instructions and a load instruction that follows the barrier instructions in the instruction sequence. In response to the multiple barrier instructions, barrier operations are issued on an interconnect coupled to the processor. Also, while the barrier operations are pending, a load request associated with the load instruction is speculatively issued. When the load request is issued, a flag is set to indicate that it was speculatively issued. The flag is reset when acknowledgments of all the barrier operations are received. Data that is returned before the acknowledgments are received is temporarily held and forwarded to the register and/or execution unit of the processor only after the acknowledgments are received. If a snoop invalidate is detected for the speculatively issued load request before completion of the barrier operations, the data is discarded and the load request is re-issued.
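The lifecycle of one speculatively issued load — flag set at issue, data held until all barrier acknowledgments arrive, discard-and-reissue on a snoop invalidate — can be modeled in a few lines. The class and attribute names below are invented for this sketch and do not come from the patent.

```python
class SpeculativeLoad:
    """Hypothetical model of one load issued past pending barriers."""
    def __init__(self, n_barriers):
        self.pending_acks = n_barriers
        self.speculative = n_barriers > 0  # flag set when issued speculatively
        self.held_data = None              # data held until barriers complete
        self.register = None               # architected destination register
        self.reissued = 0                  # times the load was re-issued

    def data_returned(self, value):
        if self.speculative:
            self.held_data = value         # hold; do not forward yet
        else:
            self.register = value

    def barrier_ack(self):
        self.pending_acks -= 1
        if self.pending_acks == 0:
            self.speculative = False       # flag reset on the last ack
            if self.held_data is not None:
                self.register = self.held_data  # now forward the held data

    def snoop_invalidate(self):
        if self.speculative:
            self.held_data = None          # discard possibly stale data
            self.reissued += 1             # and re-issue the load request
```

A load that returns before its two barriers complete is held; an intervening snoop invalidate throws the data away, and only the re-issued load's data ever reaches the register.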
    • 70. Invention Grant
    • Scarfing within a hierarchical memory architecture
    • Publication No.: US06587924B2
    • Publication date: 2003-07-01
    • Application No.: US09903727
    • Filing date: 2001-07-12
    • Inventors: Ravi Kumar Arimilli, John Steven Dodson, Jerry Don Lewis
    • IPC: G06F12/00
    • CPC: G06F12/0831, Y10S707/99931
    • A method and system for scarfing data during a data access transaction within a hierarchical data storage system. A data access request is delivered from a source device to a plurality of data storage devices. The access request includes a target address and a source path tag, wherein the source path tag includes a device identification tag that uniquely identifies a data storage device within a given level of the system traversed by the access request. A device identification tag that uniquely identifies the third party transactor within a given memory level is appended to the source path tag such that the third party transactor can scarf returning data without reserving a scarf queue entry.
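The source-path-tag mechanism can be sketched very roughly. Everything here — the dictionary request format, the function names, the device IDs — is an assumption for illustration only: the point is that each traversed device appends its identification tag, and a third-party transactor appends its own tag so the returning data reaches it too, without a reserved scarf-queue entry.

```python
def make_request(target_addr, source_id):
    # A data access request carries a target address and a source path tag.
    return {"addr": target_addr, "path": [source_id]}

def traverse(request, device_id):
    # Each storage device at a given level of the hierarchy appends the
    # tag that uniquely identifies it within that level.
    request["path"].append(device_id)
    return request

def scarf(request, third_party_id):
    # A third-party transactor appends its own identification tag so the
    # returning data is also delivered to it; no queue entry is reserved.
    request["path"].append(third_party_id)
    return request

def return_data(request, data):
    # Returning data retraces the recorded path: every tagged device,
    # including the third-party transactor, receives a copy.
    return {device: data for device in request["path"]}
```

A request tagged by the requesting CPU and one cache level, then scarfed by a peer cache, delivers the returned line to both the original source and the scarfing third party.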