    • 1. Invention grant
    • Title: Managing multiprocessor operations
    • Publication number: US07418557B2
    • Publication date: 2008-08-26
    • Application number: US11001476
    • Filing date: 2004-11-30
    • Inventors: Stephen LaRoux Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Dean Rankin, Cheng-Chung Song
    • Classification codes: G06F13/00; G06F12/0831
    • Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
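The signaling pattern this abstract describes can be sketched as a small simulation. This is an illustrative Python-threads model, not the patented hardware mechanism: the `threading.Event` stands in for the cache-coherency invalidation that a real write to the shared memory line would trigger, and all names (`shared_line`, `line_written`, the two worker functions) are invented for the example.

```python
import threading
import time

# The shared "memory line" is modeled as a dict cell; on real hardware the
# first processor would spin on its cached copy of this line, and the cache
# coherency protocol (e.g. MESI) would invalidate that copy when the second
# processor writes to the line.
shared_line = {"data": None}
line_written = threading.Event()   # stands in for the coherency invalidation

result = []

def first_processor():
    # Repetitively "read the cache line": the spin occupies this processor
    # and keeps it away from the shared resources until signaled.
    while not line_written.is_set():
        time.sleep(0.001)          # polling-loop analogue of the repeated read
    # The write invalidated our cached copy; re-read the line for the data.
    result.append(shared_line["data"])

def second_processor():
    # ... operations involving the shared resources complete here ...
    shared_line["data"] = "resources-free"
    line_written.set()             # the write that signals the first processor

t1 = threading.Thread(target=first_processor)
t2 = threading.Thread(target=second_processor)
t1.start(); t2.start()
t1.join(); t2.join()
print(result[0])                   # -> resources-free
```

In the real mechanism no explicit event object exists; the invalidation and re-read are side effects of the coherency protocol, which is the point of the claim.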
    • 2. Invention grant
    • Title: Coordination of multiprocessor operations with shared resources
    • Publication number: US07650467B2
    • Publication date: 2010-01-19
    • Application number: US12052569
    • Filing date: 2008-03-20
    • Inventors: Stephen LaRoux Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Dean Rankin, Cheng-Chung Song
    • Classification codes: G06F13/00; G06F12/0831
    • Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
    • 3. Published application
    • Title: Managing multiprocessor operations
    • Publication number: US20060117147A1
    • Publication date: 2006-06-01
    • Application number: US11001476
    • Filing date: 2004-11-30
    • Inventors: Stephen Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Rankin, Cheng-Chung Song
    • Classification codes: G06F12/14; G06F12/0831
    • Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
    • 8. Invention grant
    • Title: Handling multiple data transfer requests within a computer system
    • Publication number: US07469305B2
    • Publication date: 2008-12-23
    • Application number: US11533587
    • Filing date: 2006-09-20
    • Inventors: Lucien Mirabeau, Tiep Q. Pham
    • Classification codes: G06F13/28; G06F3/00
    • Abstract: In response to multiple data transfer requests from an application, a data definition (DD) chain is generated. The DD chain is divided into multiple DD sub-blocks by determining a bandwidth of channels (BOC) and whether the BOC is less than the DD chain. If so, the DD chain is divided by the available DMA engines. If not, the DD chain is divided by an optimum atomic transfer unit (OATU). If the division yields a remainder, the remainder is added to a last DD sub-block. If the remainder is less than a predetermined value, the size of the last DD sub-block is set to the OATU plus the remainder. Otherwise, the size of the last DD sub-block is set to the remainder. The DD sub-blocks are subsequently loaded into a set of available DMA engines. Each of the available DMA engines performs data transfers on a corresponding DD sub-block until the entire DD chain has been completed.
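The splitting rule in this abstract can be sketched as a short function. This is a minimal reading of the claim language, not the actual implementation: the abstract gives no concrete units or values, so every parameter name (`chain_len`, `boc`, `num_engines`, `oatu`, `threshold`) is invented, and the choice to apply the remainder rule to whichever unit size was selected is an interpretation.

```python
def split_dd_chain(chain_len, boc, num_engines, oatu, threshold):
    """Divide a DD chain of length chain_len into sub-block sizes.

    Sketch of the rule from the abstract; all parameters are illustrative.
    """
    if boc < chain_len:
        # Channel bandwidth is the bottleneck: divide the chain evenly
        # across the available DMA engines.
        unit = chain_len // num_engines
    else:
        # Otherwise divide by the optimum atomic transfer unit (OATU).
        unit = oatu
    sizes = [unit] * (chain_len // unit)
    remainder = chain_len % unit
    if remainder:
        if remainder < threshold:
            # Small leftover: fold it into the last sub-block.
            sizes[-1] = unit + remainder
        else:
            # Large leftover: make it its own final sub-block.
            sizes.append(remainder)
    return sizes

# BOC (200) >= chain (100): split by an assumed OATU of 30; the remainder
# 10 is not below the threshold 10, so it becomes its own sub-block.
print(split_dd_chain(100, boc=200, num_engines=4, oatu=30, threshold=10))
# -> [30, 30, 30, 10]

# BOC (50) < chain (100): split evenly across 4 engines.
print(split_dd_chain(100, boc=50, num_engines=4, oatu=30, threshold=10))
# -> [25, 25, 25, 25]
```

Each resulting sub-block would then be handed to one of the available DMA engines, which drain their sub-blocks until the whole chain is transferred.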
    • 9. Published application
    • Title: COORDINATION OF MULTIPROCESSOR OPERATIONS WITH SHARED RESOURCES
    • Publication number: US20080168238A1
    • Publication date: 2008-07-10
    • Application number: US12052569
    • Filing date: 2008-03-20
    • Inventors: Stephen LaRoux Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Dean Rankin, Cheng-Chung Song
    • Classification codes: G06F12/00; G06F12/0831
    • Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.