    • 3. Invention Grant
    • Method and apparatus with page buffer and I/O page kill definition for improved DMA and L1/L2 cache performance
    • Publication No.: US06338119B1
    • Publication date: 2002-01-08
    • Application No.: US09282631
    • Filing date: 1999-03-31
    • Inventors: Gary Dean Anderson, Ronald Xavier Arroyo, Bradly George Frey, Guy Lynn Guthrie
    • IPC: G06F13/00
    • CPC: G06F12/0835, G06F13/28
    • Abstract: A method and apparatus for improving direct memory access and cache performance utilizing a special Input/Output or “I/O” page, defined as having a large size (e.g., 4 Kilobytes or 4 Kb) but with distinctive cache line characteristics. For Direct Memory Access (DMA) reads, the first cache line in the I/O page may be accessed by a Peripheral Component Interconnect (PCI) Host Bridge as a cacheable read, and all other lines are non-cacheable accesses (DMA Read with no intent to cache). For DMA writes, the PCI Host Bridge accesses all cache lines as cacheable. The PCI Host Bridge maintains a cache snoop granularity of the I/O page size for data, which means that if the Host Bridge detects a store (invalidate) type system bus operation on any cache line within an I/O page, cached data within that page is invalidated; the Level 1 and Level 2 (L1/L2) caches continue to treat all cache lines in this page as cacheable. By defining the first line as cacheable, only one cache line need be invalidated on the system bus by the L1/L2 cache in order to cause invalidation of the whole page of data in the PCI Host Bridge. All stores to the other cache lines in the I/O page can occur directly in the L1/L2 cache without system bus operations, since these lines have been left in the ‘modified’ state in the L1/L2 cache.
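The invalidation trick in this abstract — the bridge snoops at page granularity, so one bus invalidate of the first line kills the whole cached page — can be sketched as a toy model. This is an illustrative simulation, not the patented hardware; the class name, the 128-byte line size, and the address arithmetic are assumptions for the sketch.

```python
PAGE_SIZE = 4096   # bytes per I/O page (4 KB, as stated in the abstract)
LINE_SIZE = 128    # assumed cache-line size; the abstract does not fix one

class HostBridgeCache:
    """Toy model of the PCI Host Bridge's page-granularity snooping.

    Only the first cache line of an I/O page is fetched as cacheable;
    a snooped store to ANY line within a cached page invalidates the
    bridge's data for the entire page.
    """
    def __init__(self):
        self.cached_pages = set()   # base addresses of pages the bridge caches

    def dma_read(self, addr):
        page = addr // PAGE_SIZE * PAGE_SIZE
        line_index = (addr - page) // LINE_SIZE
        if line_index == 0:
            # First line: cacheable read, so the bridge now snoops this page.
            self.cached_pages.add(page)
            return "cacheable"
        # Other lines: DMA read with no intent to cache.
        return "non-cacheable"

    def snoop_store(self, addr):
        # Store (invalidate) bus operation on any line within a cached page
        # invalidates the whole page in the bridge.
        page = addr // PAGE_SIZE * PAGE_SIZE
        if page in self.cached_pages:
            self.cached_pages.discard(page)
            return True
        return False

bridge = HostBridgeCache()
assert bridge.dma_read(0x1000) == "cacheable"          # first line of the page
assert bridge.dma_read(0x1000 + LINE_SIZE) == "non-cacheable"
# One CPU store to the first line suffices to invalidate the whole page:
assert bridge.snoop_store(0x1000) is True
assert 0x1000 not in bridge.cached_pages
```

The payoff shown here is the one the abstract claims: since the L1/L2 caches only ever need to invalidate the single cacheable line on the bus, stores to every other line in the page can stay in the ‘modified’ state locally, with no system bus traffic.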
    • 7. Invention Grant
    • Method and apparatus for managing memory operations in a data processing system using a store buffer
    • Publication No.: US06748493B1
    • Publication date: 2004-06-08
    • Application No.: US09201214
    • Filing date: 1998-11-30
    • Inventors: Ronald Xavier Arroyo, William E. Burky, Jody Bern Joyner
    • IPC: G06F13/00
    • CPC: G06F12/0835, G06F13/1663, G06F13/1673
    • Abstract: A shared memory multiprocessor (SMP) data processing system includes a store buffer implemented in a memory controller for temporarily storing recently accessed memory data within the data processing system. The memory controller includes control logic for maintaining coherency between the memory controller's store buffer and memory. The memory controller's store buffer is configured into one or more arrays sufficiently mapped to handle I/O and CPU bandwidth requirements. The combination of the store buffer and the control logic operates as a front end within the memory controller in that all memory requests are first processed by the control logic/store buffer combination, reducing memory latency and increasing effective memory bandwidth by eliminating certain memory read and write operations.
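The front-end behavior this abstract describes — every request goes through the store buffer first, so buffer hits eliminate DRAM reads and coalescing eliminates DRAM writes — can be sketched in a few lines. This is a minimal illustration under assumed names and a dict-based memory model, not the patented controller design (which uses mapped arrays and coherency logic the sketch omits).

```python
class MemoryController:
    """Toy sketch of a store-buffer front end in a memory controller.

    All requests pass through the store buffer first: read hits are
    served from the buffer (no DRAM read), and repeated writes to the
    same address coalesce in the buffer (fewer DRAM writes).
    """
    def __init__(self):
        self.memory = {}        # backing DRAM model: addr -> value
        self.store_buffer = {}  # recently accessed data: addr -> value
        self.memory_reads = 0   # count of actual DRAM reads
        self.memory_writes = 0  # count of actual DRAM writes

    def write(self, addr, value):
        # Stores land in the buffer; a later store to the same address
        # simply overwrites it there (write coalescing).
        self.store_buffer[addr] = value

    def read(self, addr):
        if addr in self.store_buffer:
            return self.store_buffer[addr]   # hit: DRAM access eliminated
        self.memory_reads += 1               # miss: go to DRAM
        return self.memory.get(addr, 0)

    def flush(self):
        # Write buffered data back to DRAM, one write per address.
        for addr, value in self.store_buffer.items():
            self.memory[addr] = value
            self.memory_writes += 1
        self.store_buffer.clear()

mc = MemoryController()
mc.write(0x40, 1)
mc.write(0x40, 2)             # coalesces with the previous store
assert mc.read(0x40) == 2     # served from the buffer
assert mc.memory_reads == 0   # no DRAM read was needed
mc.flush()
assert mc.memory_writes == 1  # two CPU stores became one DRAM write
```

The sketch shows only the latency/bandwidth argument; the real controller must also keep the buffer coherent with memory under concurrent CPU and I/O traffic, which is the role of the control logic the abstract pairs with the buffer.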