    • 1. Published invention application
    • Title: SYSTEM AND METHOD FOR OPTIMIZING NEIGHBORING CACHE USAGE IN A MULTIPROCESSOR ENVIRONMENT
    • Publication number: US20090164731A1
    • Publication date: 2009-06-25
    • Application number: US11959652
    • Filing date: 2007-12-19
    • Inventors: Hien Minh Le, Jason Alan Cox, Robert John Dorsey, Richard Nicholas, Eric Francis Robinson, Thuong Quang Truong
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • Abstract: A method for managing data operates in a data processing system with a system memory and a plurality of processing units (PUs), each PU having a cache comprising a plurality of cache lines, each cache line having one of a plurality of coherency states, and each PU coupled to at least another one of the plurality of PUs. A first PU selects a castout cache line of a plurality of cache lines in a first cache of the first PU to be castout of the first cache. The first PU sends a request to a second PU, wherein the second PU is a neighboring PU of the first PU, and the request comprises a first address and first coherency state of the selected castout cache line. The second PU determines whether the first address matches an address of any cache line in the second PU. The second PU sends a response to the first PU based on a coherency state of each of a plurality of cache lines in the second cache and whether there is an address hit. The first PU determines whether to transmit the castout cache line to the second PU based on the response. And, in the event the first PU determines to transmit the castout cache line to the second PU, the first PU transmits the castout cache line to the second PU.
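The abstract describes a castout (eviction) handshake between neighboring processing units. Below is a minimal C sketch of the sending PU's side of that flow, assuming a generic MESI-style state set. Every type and helper name here (cache_line_t, castout_request_t, select_castout_victim, send_request_to_neighbor, and so on) is a hypothetical illustration rather than the patent's actual interface, and the interconnect helpers are left as declarations.

```c
#include <stdbool.h>
#include <stdint.h>

/* Generic MESI-style coherency states (illustrative; the patent's own
 * state set may differ). */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } coherency_state_t;

typedef struct {
    uint64_t          address;    /* address/tag of the cache line          */
    coherency_state_t state;      /* current coherency state                */
    uint8_t           data[128];  /* line payload (line size assumed)       */
} cache_line_t;

/* Request sent to the neighboring PU: address and coherency state of the
 * line selected for castout, as the abstract describes. */
typedef struct {
    uint64_t          address;
    coherency_state_t state;
} castout_request_t;

/* Response from the neighbor: whether it will accept the line. */
typedef struct {
    bool accept;
} castout_response_t;

/* Hypothetical helpers assumed to exist elsewhere (interconnect, victim
 * selection, memory write-back); declarations only in this sketch. */
cache_line_t      *select_castout_victim(void);
castout_response_t send_request_to_neighbor(castout_request_t req);
void               transmit_line_to_neighbor(const cache_line_t *line);
void               write_back_to_memory(const cache_line_t *line);

/* Sending-PU flow sketched from the abstract: select a victim, ask the
 * neighboring PU, and transmit the line only if the response says so. */
void castout_to_neighbor(void)
{
    cache_line_t *victim = select_castout_victim();

    castout_request_t req = { .address = victim->address,
                              .state   = victim->state };
    castout_response_t rsp = send_request_to_neighbor(req);

    if (rsp.accept) {
        transmit_line_to_neighbor(victim);   /* line stays cached nearby      */
    } else if (victim->state == MODIFIED) {
        write_back_to_memory(victim);        /* fall back to system memory    */
    }
    victim->state = INVALID;                 /* line leaves this cache either way */
}
```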
    • 5. Granted invention patent
    • Title: Method and apparatus for forwarding store data to loads in a pipelined processor
    • Publication number: US07640414B2
    • Publication date: 2009-12-29
    • Application number: US11560443
    • Filing date: 2006-11-16
    • Inventors: Jason Alan Cox, Kevin Chih Kang Lin, Eric Francis Robinson
    • IPC: G06F12/00
    • CPC: G06F9/3834, G06F12/0802
    • Abstract: Methods, systems, and computer program products for forwarding store data to loads in a pipelined processor are provided. In one implementation, a processor is provided that includes a decoder operable to decode an instruction, and a plurality of execution units operable to respectively execute a decoded instruction from the decoder. The plurality of execution units include a load/store execution unit operable to execute decoded load instructions and decoded store instructions and generate corresponding load memory operations and store memory operations. The store queue is operable to buffer one or more store memory operations prior to the one or more memory operations being completed, and the store queue is operable to forward store data of the one or more store memory operations buffered in the store queue to a load memory operation on a byte-by-byte basis.
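The abstract's key point is that the store queue forwards buffered store data to a load one byte at a time rather than all-or-nothing. A small C sketch of that idea follows, under stated assumptions: store_queue_entry_t, forward_store_data, SQ_DEPTH, and the per-byte valid mask are invented names for illustration, age checks between the load and the buffered stores are omitted, and real hardware implements this with per-byte select logic rather than loops.

```c
#include <stdbool.h>
#include <stdint.h>

#define SQ_DEPTH   16   /* number of in-flight (not yet completed) stores */
#define LINE_BYTES 8    /* forwarding window for this sketch              */

/* One buffered store: address, per-byte valid mask, and the data bytes. */
typedef struct {
    bool     valid;        /* entry holds an uncompleted store              */
    uint64_t address;      /* byte address the store writes to              */
    uint8_t  byte_valid;   /* bit i set => data[i] was written by the store */
    uint8_t  data[LINE_BYTES];
} store_queue_entry_t;

typedef struct {
    store_queue_entry_t entry[SQ_DEPTH];
    int                 tail;   /* entries nearer tail are younger */
} store_queue_t;

/* Forward store data to a load covering [load_addr, load_addr + nbytes),
 * nbytes <= 8.  Each byte is taken from the youngest buffered store that
 * wrote it; bytes no buffered store wrote are left for the cache to supply.
 * Returns a mask of the bytes that were forwarded. */
uint8_t forward_store_data(const store_queue_t *sq, uint64_t load_addr,
                           int nbytes, uint8_t *load_data)
{
    uint8_t forwarded = 0;

    /* Scan from youngest to oldest so the most recent store wins per byte. */
    for (int i = 0; i < SQ_DEPTH; i++) {
        int idx = (sq->tail - 1 - i + SQ_DEPTH) % SQ_DEPTH;
        const store_queue_entry_t *st = &sq->entry[idx];
        if (!st->valid)
            continue;

        for (int b = 0; b < nbytes; b++) {
            uint64_t byte_addr = load_addr + (uint64_t)b;
            if (forwarded & (1u << b))
                continue;                          /* byte already forwarded  */
            if (byte_addr < st->address ||
                byte_addr >= st->address + LINE_BYTES)
                continue;                          /* store doesn't cover it  */
            int off = (int)(byte_addr - st->address);
            if (st->byte_valid & (1u << off)) {
                load_data[b] = st->data[off];      /* forward this one byte   */
                forwarded   |= (1u << b);
            }
        }
    }
    return forwarded;  /* bits still clear must come from the cache */
}
```

Scanning from youngest to oldest and skipping bytes that are already filled lets the most recent store win each byte, which is what byte-granular forwarding requires when several buffered stores overlap the load.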
    • 9. Published invention application
    • Title: METHOD AND APPARATUS FOR FORWARDING STORE DATA TO LOADS IN A PIPELINED PROCESSOR
    • Publication number: US20080120472A1
    • Publication date: 2008-05-22
    • Application number: US11560443
    • Filing date: 2006-11-16
    • Inventors: Jason Alan Cox, Kevin Chih Kang Lin, Eric Francis Robinson
    • IPC: G06F9/30, G06F12/08
    • CPC: G06F9/3834, G06F12/0802
    • Abstract: Methods, systems, and computer program products for forwarding store data to loads in a pipelined processor are provided. In one implementation, a processor is provided that includes a decoder operable to decode an instruction, and a plurality of execution units operable to respectively execute a decoded instruction from the decoder. The plurality of execution units include a load/store execution unit operable to execute decoded load instructions and decoded store instructions and generate corresponding load memory operations and store memory operations. The store queue is operable to buffer one or more store memory operations prior to the one or more memory operations being completed, and the store queue is operable to forward store data of the one or more store memory operations buffered in the store queue to a load memory operation on a byte-by-byte basis.
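This entry is the published application for the patent granted in entry 5, with the same abstract. As a complement to the hypothetical forward_store_data() sketch shown after entry 5 (whose definitions this example assumes are in scope), here is a short usage example of the byte-by-byte behavior: a 4-byte load overlaps two buffered stores, so three of its bytes are forwarded from the store queue and the remaining byte must come from the cache.

```c
#include <stdio.h>

/* Assumes store_queue_t, store_queue_entry_t and forward_store_data() from
 * the sketch after entry 5. */
int main(void)
{
    store_queue_t sq = {0};

    /* Older 2-byte store to 0x1000: bytes 0 and 1 of its entry are valid. */
    sq.entry[0] = (store_queue_entry_t){ .valid = true, .address = 0x1000,
                                         .byte_valid = 0x03,
                                         .data = { 0xAA, 0xBB } };
    /* Younger 1-byte store to 0x1003: byte 3 of its entry is valid. */
    sq.entry[1] = (store_queue_entry_t){ .valid = true, .address = 0x1000,
                                         .byte_valid = 0x08,
                                         .data = { [3] = 0xCC } };
    sq.tail = 2;

    uint8_t load_data[4] = {0};
    uint8_t mask = forward_store_data(&sq, 0x1000, 4, load_data);

    printf("forwarded mask = 0x%X\n", mask);   /* expect 0x0B: bytes 0, 1, 3 */
    printf("bytes: %02X %02X ?? %02X\n",
           load_data[0], load_data[1], load_data[3]);
    return 0;   /* byte 2 was not forwarded and would come from the cache */
}
```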
    • 10. Granted invention patent
    • Title: System and method for optimizing neighboring cache usage in a multiprocessor environment
    • Publication number: US08296520B2
    • Publication date: 2012-10-23
    • Application number: US11959652
    • Filing date: 2007-12-19
    • Inventors: Hien Minh Le, Jason Alan Cox, Robert John Dorsey, Richard Nicholas, Eric Francis Robinson, Thuong Quang Truong
    • IPC: G06F12/08
    • CPC: G06F12/0831
    • Abstract: A method for managing data operates in a data processing system with a system memory and a plurality of processing units (PUs), each PU having a cache comprising a plurality of cache lines, each cache line having one of a plurality of coherency states, and each PU coupled to at least another one of the plurality of PUs. A first PU selects a castout cache line of a plurality of cache lines in a first cache of the first PU to be castout of the first cache. The first PU sends a request to a second PU, wherein the second PU is a neighboring PU of the first PU, and the request comprises a first address and first coherency state of the selected castout cache line. The second PU determines whether the first address matches an address of any cache line in the second PU. The second PU sends a response to the first PU based on a coherency state of each of a plurality of cache lines in the second cache and whether there is an address hit. The first PU determines whether to transmit the castout cache line to the second PU based on the response. And, in the event the first PU determines to transmit the castout cache line to the second PU, the first PU transmits the castout cache line to the second PU.
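This is the granted version of the application in entry 1. As a counterpart to the sending-side sketch shown there, the following hypothetical C sketch models the neighboring (receiving) PU's decision from the abstract: check for an address hit and inspect the coherency states of the lines in the candidate set before responding. The acceptance policy shown (accept only when there is no hit and an invalid way is free) is a stand-in assumption, not the claimed logic, and the illustrative types (cache_line_t, castout_request_t, castout_response_t, coherency states) are reused from the earlier sketch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Reuses cache_line_t, castout_request_t, castout_response_t and the
 * coherency-state enum from the sending-side sketch after entry 1. */

#define WAYS_PER_SET 8   /* assumed associativity for this sketch */

/* Hypothetical lookup of the set the requested address maps to. */
cache_line_t *lookup_set(uint64_t address);

/* Receiving-PU side sketched from the abstract: respond based on whether the
 * address already hits in this cache and on the coherency states of the lines
 * in the candidate set. */
castout_response_t handle_castout_request(castout_request_t req)
{
    cache_line_t *set = lookup_set(req.address);
    bool hit = false;
    bool has_invalid_way = false;

    for (int way = 0; way < WAYS_PER_SET; way++) {
        if (set[way].state != INVALID && set[way].address == req.address)
            hit = true;                  /* address already cached here */
        if (set[way].state == INVALID)
            has_invalid_way = true;      /* room to hold the castout    */
    }

    castout_response_t rsp = { .accept = !hit && has_invalid_way };
    return rsp;
}
```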