    • 2. Granted patent
    • Method and apparatus for global ordering to insure latency independent coherence
    • Publication number: US07644237B1
    • Publication date: 2010-01-05
    • Application number: US10783960
    • Filing date: 2004-02-20
    • Inventors: Thomas A. Petersen; Sanjay Vishin
    • IPC: G06F12/08
    • CPC: G06F12/0831
    • Abstract: A method and apparatus is described for insuring coherency between memories in a multi-agent system where the agents are interconnected by one or more fabrics. A global arbiter is used to segment coherency into three phases: request; snoop; and response, and to apply global ordering to the requests. A bus interface having request, snoop, and response logic is provided for each agent. A bus interface having request, snoop and response logic is provided for the global arbiter, and a bus interface is provided to couple the global arbiter to each type of fabric it is responsible for. Global ordering and arbitration logic tags incoming requests from the multiple agents and insures that snoops are responded to according to the global order, without regard to latency differences in the fabrics.
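The global-ordering mechanism this abstract describes can be sketched as a toy Python model: requests are tagged with a global sequence number, and snoop responses that arrive early (over a faster fabric) are buffered until every earlier-tagged response has completed. All names and structure here are illustrative, not from the patent.

```python
import heapq

class GlobalArbiter:
    """Toy model of the three-phase scheme (request/snoop/response):
    requests get a global order tag, and snoop responses are released
    strictly in tag order, regardless of fabric latency."""

    def __init__(self):
        self.next_tag = 0          # global order counter
        self.next_to_release = 0   # tag whose response may complete next
        self.pending = []          # min-heap of (tag, agent) early arrivals

    def request(self, agent):
        """Request phase: tag the incoming request in arrival order."""
        tag = self.next_tag
        self.next_tag += 1
        return tag

    def snoop_response(self, tag, agent):
        """Response phase: buffer out-of-order arrivals, release in order."""
        heapq.heappush(self.pending, (tag, agent))
        released = []
        while self.pending and self.pending[0][0] == self.next_to_release:
            released.append(heapq.heappop(self.pending))
            self.next_to_release += 1
        return released
```

For example, if agents A, B, C issue requests tagged 0, 1, 2 but C's snoop response arrives first over a low-latency fabric, the arbiter holds it until responses 0 and 1 have been released, so completion order always matches the global order.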
    • 3. Patent application
    • HORIZONTALLY-SHARED CACHE VICTIMS IN MULTIPLE CORE PROCESSORS
    • Publication number: US20080091880A1
    • Publication date: 2008-04-17
    • Application number: US11681610
    • Filing date: 2007-03-02
    • Inventor: Sanjay Vishin
    • IPC: G06F12/08
    • CPC: G06F12/0806; G06F12/0811; G06F12/0842; Y02D10/13
    • Abstract: A processor includes multiple processor core units, each including a processor core and a cache memory. Victim lines evicted from a first processor core unit's cache may be stored in another processor core unit's cache, rather than written back to system memory. If the victim line is later requested by the first processor core unit, the victim line is retrieved from the other processor core unit's cache. The processor has low latency data transfers between processor core units. The processor transfers victim lines directly between processor core units' caches or utilizes a victim cache to temporarily store victim lines while searching for their destinations. The processor evaluates cache priority rules to determine whether victim lines are discarded, written back to system memory, or stored in other processor core units' caches. Cache priority rules can be based on cache coherency data, load balancing schemes, and architectural characteristics of the processor.
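The victim-placement decision described above can be sketched in Python. The concrete rules here (clean lines discarded, least-occupied peer cache preferred) are illustrative stand-ins for the patent's cache priority rules, which may also consider coherency data and architectural characteristics.

```python
from dataclasses import dataclass, field

@dataclass
class Line:
    addr: int
    data: int
    dirty: bool

@dataclass
class CoreCache:
    name: str
    capacity: int
    lines: list = field(default_factory=list)

    def has_free_way(self):
        return len(self.lines) < self.capacity

    def occupancy(self):
        return len(self.lines)

    def store(self, line):
        self.lines.append(line)

def place_victim(victim, source_core, cores, system_memory):
    """Decide whether an evicted line is discarded, stored in a peer
    core's cache, or written back to system memory."""
    if not victim.dirty:
        return "discarded"                   # a clean copy exists in memory
    # Prefer a peer cache with free capacity (load-balancing stand-in).
    peers = [c for c in cores if c is not source_core and c.has_free_way()]
    if peers:
        target = min(peers, key=lambda c: c.occupancy())
        target.store(victim)
        return f"stored in {target.name}"
    system_memory[victim.addr] = victim.data  # fall back to writeback
    return "written back"
```

A later request from the source core would then check peer caches before going to system memory, which is the latency win the abstract claims.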
    • 4. Patent application
    • Multithreading processor including thread scheduler based on instruction stall likelihood prediction
    • Publication number: US20060179280A1
    • Publication date: 2006-08-10
    • Application number: US11051998
    • Filing date: 2005-02-04
    • Inventors: Michael Jensen; Darren Jones; Ryan Kinter; Sanjay Vishin
    • IPC: G06F9/30
    • CPC: G06F9/3851; G06F9/30087; G06F9/3009; G06F9/382; G06F9/3838; G06F9/3861
    • Abstract: An apparatus for scheduling dispatch of instructions among a plurality of threads being concurrently executed in a multithreading processor is provided. The apparatus includes an instruction decoder that generates register usage information for an instruction from each of the threads, a priority generator that generates a priority for each instruction based on the register usage information and state information of instructions currently executing in an execution pipeline, and selection logic that dispatches at least one instruction from at least one thread based on the priority of the instructions. The priority indicates the likelihood the instruction will execute in the execution pipeline without stalling. For example, an instruction may have a high priority if it has little or no register dependencies or its data is known to be available; or may have a low priority if it has strong register dependencies or is an uncacheable or synchronized storage space load instruction.
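The priority scheme in this abstract can be sketched as a small Python model: each candidate instruction gets a score from its register-usage information and the pipeline state, and the scheduler dispatches from the highest-scoring thread. The tier values and field names are assumptions made for illustration.

```python
def stall_priority(instr, pipeline_state):
    """Higher value = less likely to stall (illustrative tiers)."""
    if instr.get("uncacheable_load"):
        return 0                        # likely long-latency stall
    deps = instr.get("src_regs", set())
    unresolved = deps & pipeline_state["writes_in_flight"]
    if not unresolved:
        return 3                        # all source operands ready
    if unresolved <= pipeline_state["forwardable"]:
        return 2                        # results available via bypass
    return 1                            # strong register dependency

def dispatch(threads, pipeline_state):
    """Selection logic: dispatch from the highest-priority thread."""
    return max(threads,
               key=lambda t: stall_priority(t["next_instr"], pipeline_state))
```

The effect is that a thread whose next instruction waits on an in-flight, non-forwardable result yields the dispatch slot to a thread whose operands are ready.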
    • 5. Granted patent
    • Horizontally-shared cache victims in multiple core processors
    • Publication number: US08725950B2
    • Publication date: 2014-05-13
    • Application number: US12828056
    • Filing date: 2010-06-30
    • Inventor: Sanjay Vishin
    • IPC: G06F12/08
    • CPC: G06F12/0806; G06F12/0811; G06F12/0842; Y02D10/13
    • Abstract: A processor includes multiple processor core units, each including a processor core and a cache memory. Victim lines evicted from a first processor core unit's cache may be stored in another processor core unit's cache, rather than written back to system memory. If the victim line is later requested by the first processor core unit, the victim line is retrieved from the other processor core unit's cache. The processor has low latency data transfers between processor core units. The processor transfers victim lines directly between processor core units' caches or utilizes a victim cache to temporarily store victim lines while searching for their destinations. The processor evaluates cache priority rules to determine whether victim lines are discarded, written back to system memory, or stored in other processor core units' caches. Cache priority rules can be based on cache coherency data, load balancing schemes, and architectural characteristics of the processor.
    • 6. Granted patent
    • Method and apparatus for global ordering to insure latency independent coherence
    • Publication number: US08037253B2
    • Publication date: 2011-10-11
    • Application number: US12557421
    • Filing date: 2009-09-10
    • Inventors: Thomas A. Petersen; Sanjay Vishin
    • IPC: G06F12/08
    • CPC: G06F12/0831
    • Abstract: A method and apparatus is described for insuring coherency between memories in a multi-agent system where the agents are interconnected by one or more fabrics. A global arbiter is used to segment coherency into three phases: request; snoop; and response, and to apply global ordering to the requests. A bus interface having request, snoop, and response logic is provided for each agent. A bus interface having request, snoop and response logic is provided for the global arbiter, and a bus interface is provided to couple the global arbiter to each type of fabric it is responsible for. Global ordering and arbitration logic tags incoming requests from the multiple agents and insures that snoops are responded to according to the global order, without regard to latency differences in the fabrics.
    • 7. Granted patent
    • System and method for dynamic voltage scaling in a GPS receiver
    • Publication number: US08009090B2
    • Publication date: 2011-08-30
    • Application number: US12435591
    • Filing date: 2009-05-05
    • Inventors: Sanjay Vishin; Steve Gronemeyer
    • IPC: G01S19/24
    • CPC: G01S19/34; G01S19/24
    • Abstract: Systems and methods are disclosed herein to dynamically vary supply voltages and clock frequencies, also known as dynamic voltage scaling (DVS), in GPS receivers to minimize receiver power consumption while meeting performance requirements. For the baseband circuitry performing satellite acquisition and tracking, supply voltages and clock frequencies to the baseband circuitry are dynamically adjusted as a function of signal processing requirements and operating conditions for reducing baseband power consumption. Similarly, the supply voltage and clock frequency to the processor running navigation software and event processing are dynamically adjusted as a function of navigation performance requirements and event occurrences to reduce processor power consumption.
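The DVS policy this abstract describes can be sketched as a lookup over voltage/frequency operating points: estimate the processing requirement from the current receiver state, then pick the lowest point that meets it. The operating points and the load model below are invented for illustration and are not from the patent.

```python
# Illustrative (voltage V, frequency MHz) operating points, low to high.
OPERATING_POINTS = [(0.9, 50), (1.0, 100), (1.2, 200)]

def select_operating_point(required_mhz):
    """Pick the lowest voltage/frequency pair meeting the requirement;
    dynamic power scales roughly with f * V^2, so lower is cheaper."""
    for volts, mhz in OPERATING_POINTS:
        if mhz >= required_mhz:
            return volts, mhz
    return OPERATING_POINTS[-1]         # saturate at the fastest point

def baseband_requirement(satellites_tracked, acquiring):
    """Crude stand-in for signal-processing load: steady-state tracking
    is cheap, acquisition needs the full correlator rate."""
    return 200 if acquiring else 20 + 10 * satellites_tracked
```

During acquisition the model runs at the top operating point; once the receiver is only tracking a few satellites, it drops to the lowest point that still meets the requirement.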
    • 9. Patent application
    • Preventing Writeback Race in Multiple Core Processors
    • Publication number: US20080320232A1
    • Publication date: 2008-12-25
    • Application number: US11767225
    • Filing date: 2007-06-22
    • Inventors: Sanjay Vishin; Adam Stoler
    • IPC: G06F12/08
    • CPC: G06F12/128; G06F12/0806; G06F12/0815
    • Abstract: A processor prevents writeback race condition errors by maintaining responsibility for data until the writeback request is confirmed by an intervention message from a cache coherency manager. If a request for the same data arrives before the intervention message, the processor core unit provides the requested data and cancels the pending writeback request. The cache coherency data associated with cache lines indicates whether a request for data has been received prior to the intervention message associated with the writeback request. The cache coherency data of a cache line has a value of "modified" when the writeback request is initiated. When the intervention message associated with the writeback request is received, the cache line's cache coherency data is examined. A change in the cache coherency data from the value of "modified" indicates that the request for data has been received prior to the intervention and the writeback request should be cancelled.
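The race-prevention protocol above can be sketched as a small state machine: the line stays "modified" when the writeback is issued, a remote request that beats the intervention changes that state, and the intervention then sees the change and cancels the writeback. State names follow the abstract; the class structure is illustrative.

```python
class CoreLine:
    """Toy model: a core stays responsible for a line after issuing a
    writeback until the coherency manager's intervention confirms it."""

    def __init__(self, data):
        self.data = data
        self.state = "modified"        # coherency state at writeback time
        self.writeback_pending = False

    def issue_writeback(self):
        self.writeback_pending = True  # state stays "modified" until confirmed

    def remote_request(self):
        """Another core asks for the data before the intervention arrives:
        supply it; the state change records that the race occurred."""
        self.state = "shared"
        return self.data

    def intervention(self):
        """Intervention for the writeback: examine the coherency state.
        Any change away from 'modified' means the writeback must cancel."""
        self.writeback_pending = False
        if self.state != "modified":
            return "writeback cancelled"
        return "writeback completed"
```

The key property is that no stale copy ever reaches memory: either the writeback completes unchallenged, or the requester received the current data and the writeback is dropped.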
    • 10. Granted patent
    • Cache flush operation for a stack-based microprocessor
    • Publication number: US06219757B1
    • Publication date: 2001-04-17
    • Application number: US09032396
    • Filing date: 1998-02-27
    • Inventor: Sanjay Vishin
    • IPC: G06F12/00
    • CPC: G06F12/0875; G06F12/0891
    • Abstract: A method for flushing the data cache in a microprocessor. A central processing unit in the microprocessor is used to perform an operation on a first address stored in a stack cache, the address being associated with a first cache line in a data cache memory. The result of the operation is left on the top of the stack in the stack cache as a second address. A first valid bit associated with the first cache line is changed from a valid setting to an invalid setting during the same clock cycle of the microprocessor in which the operation is performed.
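One step of the flush method above can be sketched in Python: the address on the stack top selects a cache line whose valid bit is cleared, and the operation's result (assumed here to be the address advanced by one line, which the patent does not specify) replaces the stack top, ready for the next step. The line size is also an assumption for illustration; in hardware the invalidate happens in the same clock cycle as the stack operation.

```python
LINE_SIZE = 32  # assumed cache line size in bytes, for illustration only

def flush_step(stack, data_cache):
    """One flush step: invalidate the line addressed by the stack top and
    leave the next address to flush on the stack top as the result."""
    addr = stack[-1]                          # first address, on stack top
    line_index = addr // LINE_SIZE
    data_cache[line_index]["valid"] = False   # clear the line's valid bit
    stack[-1] = addr + LINE_SIZE              # second address replaces it
    return line_index
```

Repeating the step walks the stack-top address across the cache, invalidating one line per operation without a separate loop counter.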