    • 1. Invention grant
    • Method and system for speculatively processing a load instruction before completion of a preceding synchronization instruction
    • Publication No.: US06484230B1
    • Publication date: 2002-11-19
    • Application No.: US09161640
    • Filing date: 1998-09-28
    • Inventors: Brian R. Konigsburg; Alexander Edward Okpisz; Thomas Albert Petersen; Bruce Joseph Ronchetti
    • IPC: G06F13/00
    • CPC: G06F12/0831
    • Abstract: A method and system of facilitating storage accesses within a multiprocessor system subsequent to a synchronization instruction by a local processor consists of determining if data for the storage accesses is cacheable and if there is a “hit” in a cache. If both conditions are met, the storage accesses return the data to the local processor. The storage accesses have an entry on an interrupt table which is used to discard the returned data if a snoop kills the line before the synchronization instruction completes. After the cache returns data, a return data bit is set in the interrupt table. A snoop killing the line sets a snooped bit in the interrupt table. Upon completion of the synchronization instruction, any entries in the interrupt table subsequent to the synchronization instruction that have the return data bit and snooped bit set are flushed. The flush occurs because the data returned to the local processor due to a “cacheable hit” subsequent to the synchronization instruction was out of order with the snoop and the processor must flush the data and go back out to the system bus for the new data.
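The interrupt-table bookkeeping described in the abstract above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; all names (`InterruptEntry`, `snoop_kill`, `flush_after_sync`) are assumptions introduced for the sketch.

```python
# Hypothetical sketch of the interrupt-table mechanism: each speculative
# load past the sync instruction gets an entry; two bits track whether the
# cache already returned data and whether a snoop killed the line.
from dataclasses import dataclass

@dataclass
class InterruptEntry:
    address: int
    returned_data: bool = False  # cache already returned data to the processor
    snooped: bool = False        # a snoop killed this cache line

def snoop_kill(table, address):
    """A snoop invalidating a line marks every matching table entry."""
    for entry in table:
        if entry.address == address:
            entry.snooped = True

def flush_after_sync(table):
    """On sync completion, flush entries whose data was returned out of
    order with a snoop; those loads must be reissued on the system bus."""
    flushed = [e for e in table if e.returned_data and e.snooped]
    kept = [e for e in table if not (e.returned_data and e.snooped)]
    return kept, flushed
```

Under this model, a load whose line was snoop-killed after its data came back is discarded at sync completion, while untouched loads keep their speculatively returned data.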
    • 2. Invention grant
    • Data received before coherency window for a snoopy bus
    • Publication No.: US06898675B1
    • Publication date: 2005-05-24
    • Application No.: US09138380
    • Filing date: 1998-08-24
    • Inventors: Alexander Edward Okpisz; Thomas Albert Petersen
    • IPC: G06F12/00; G06F12/08
    • CPC: G06F12/0831
    • Abstract: Where a null response can be expected from devices snooping a load operation, data may be used by a requesting processor prior to the coherency response window. A null snoop response may be determined, for example, from the availability of the data without a bus transaction. The capability of accelerating data in this fashion requires only a few simple changes in processor state transitions, required to permit entry of the data completion wait state prior to the response wait state. Processors may forward accelerated data to execution units with the expectation that a null snoop response will be received during the coherency response window. If a non-null snoop response is received, an error condition is asserted. Data acceleration of the type described allows critical data to get back to the processor without waiting for the coherency response window.
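The data-acceleration idea above reduces to a small decision: if the data is available without a bus transaction, forward it before the coherency response window and assert an error should a non-null snoop response arrive later. A minimal sketch, with all names assumed for illustration:

```python
# Illustrative model of "accelerated data": data available early is forwarded
# to execution units before the coherency response window, on the expectation
# of a null snoop response; a non-null response becomes an error condition.
def consume_load(data_available_early, snoop_response):
    """Return (forwarded_early, error). Both parameter names are assumptions."""
    if data_available_early:
        forwarded_early = True  # enter data-completion wait before response wait
        error = snoop_response != "null"  # anything but null is an error
        return forwarded_early, error
    # Without early data, wait for the coherency window as usual.
    return False, False
```

The design point the abstract highlights is that only the state-transition ordering changes; the common (null-response) case pays no coherency-window latency.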
    • 3. Invention grant
    • Method and system for write-through stores of varying sizes
    • Publication No.: US06415362B1
    • Publication date: 2002-07-02
    • Application No.: US09303364
    • Filing date: 1999-04-29
    • Inventors: James Nolan Hardage; Alexander Edward Okpisz; Thomas Albert Petersen
    • IPC: G06F12/00
    • CPC: G06F12/0811; G06F12/0831; G06F12/0886
    • Abstract: A method and system for performing write-through store operations of valid data of varying sizes in a data processing system, where the data processing system includes multiple processors that are coupled to an interconnect through a memory hierarchy, where the memory hierarchy includes multiple levels of cache, where at least one lower level of cache of the multiple levels of cache requires store operations of all valid data of at least a predetermined size. First, it is determined whether or not a write-through store operation is a cache hit in a higher level of cache of the multiple levels of cache. In response to a determination that a cache hit has occurred in the higher level of cache, the write-through store operation is merged with data read from the higher level of cache to provide a merged write-through operation of all valid data of at least the predetermined size to a lower level of cache. The merged write-through operation is performed in the lower level of cache, such that write-through operations of varying sizes to a lower level of cache which requires write operations of all valid data of at least a predetermined size are performed with data merged from a higher level of cache.
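The merge step described above can be sketched in a few lines: the narrow store is overlaid onto the line read from the higher-level cache so the lower level always receives a full-size store. Line size, offsets, and names here are illustrative assumptions, not values from the patent.

```python
# Sketch of merging a narrow write-through store with the line read from a
# higher cache level, yielding a store of the full predetermined size.
LINE_SIZE = 8  # assumed "predetermined size" required by the lower-level cache

def merge_write_through(cache_line, store_data, offset):
    """Overlay the partial store onto the higher-level cache line and
    return the fully valid line to write through to the lower level."""
    assert len(cache_line) == LINE_SIZE
    merged = bytearray(cache_line)
    merged[offset:offset + len(store_data)] = store_data
    return bytes(merged)
```

For example, a 2-byte store at offset 2 into an 8-byte line produces one full-width write-through instead of a sub-line write the lower cache cannot accept.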
    • 4. Invention grant
    • Method and system for eliminating adjacent address collisions on a pipelined response bus
    • Publication No.: US6122692A
    • Publication date: 2000-09-19
    • Application No.: US100353
    • Filing date: 1998-06-19
    • Inventors: Alexander Edward Okpisz; Thomas Albert Petersen
    • IPC: G06F13/16; G06F13/14; G06F13/376; G06F13/42; G06F13/00
    • CPC: G06F13/4243; G06F13/376
    • Abstract: Described is an apparatus for eliminating early retrying of PAAM address conflicts on a system bus with multiple processors, by comparing addresses of the current master processor to the next transaction to be issued by a non-master processor. If the addresses are the same and a PAAM window is detected, the non-master processor will switch the next transaction type to be issued to a null type transaction. Even though the addresses match, the PAAM window is ignored for a null type transaction. The null transaction type insertion by the non-master processor reduces the latency of a PAAM self-retried operation and avoids a possible livelock condition by breaking the processors out of the livelock. This allows the processors to stop retrying and leave the bus. The processors are able to immediately arbitrate instead of delaying past the astat window and increasing bus latency.
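The collision-avoidance rule above is a simple substitution: when the non-master's pending transaction hits the same address as the current master inside the PAAM window, a null transaction (which the window does not apply to) is issued instead, breaking the retry livelock. A minimal sketch with assumed names:

```python
# Illustrative decision logic for null-transaction insertion: compare the
# non-master's pending address against the current master's address; on a
# match inside the PAAM window, issue a null transaction instead of retrying.
def next_transaction(master_addr, pending_addr, paam_window_open):
    if paam_window_open and pending_addr == master_addr:
        return ("null", None)       # null transactions ignore the PAAM window
    return ("read", pending_addr)   # otherwise issue the pending transaction
```

The processor can then re-arbitrate immediately rather than self-retrying through repeated PAAM windows.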
    • 6. Invention grant
    • 6XX bus with exclusive intervention
    • Publication No.: US06324622B1
    • Publication date: 2001-11-27
    • Application No.: US09138320
    • Filing date: 1998-08-24
    • Inventors: Alexander Edward Okpisz; Thomas Albert Petersen
    • IPC: G06F12/00
    • CPC: G06F12/0811; G06F12/0833
    • Abstract: Data loaded from system memory to a cache within a multiprocessor system is set to the exclusive coherency state if no other cache or processor has a copy of that data. Subsequent accesses to the data by another processor or cache which are snooped by the data owner result in an exclusive intervention by the data owner. The data owner sources the data to and shares the data with the requesting device on a read and transfers exclusive ownership of the data to the requesting device on a read with intent to modify. Unmodified intervention with cache-to-cache transfers over possibly much slower accesses to memory is thus supported by the multiprocessor system without requiring additional tag or status bits in the cache directories, saving a significant area.
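The two intervention outcomes described above can be modeled with MESI-style state labels: a plain read leaves both caches Shared, while a read-with-intent-to-modify transfers exclusive ownership. The function and state encoding are assumptions for illustration, not the patent's own notation.

```python
# Toy model of exclusive intervention: the owner of an Exclusive line
# sources it cache-to-cache when it snoops a request for that line.
def exclusive_intervention(request):
    """Return (owner_new_state, requester_new_state) for a snooped request
    hitting a line the owner holds in the Exclusive state."""
    if request == "read":
        return ("S", "S")   # data is shared between owner and requester
    if request == "rwitm":
        return ("I", "E")   # exclusive ownership moves to the requester
    raise ValueError(f"unknown request type: {request}")
```

Either way the data moves cache-to-cache rather than via a slower memory access, and, per the abstract, without extra tag or status bits in the cache directories.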