    • 21. Granted Patent
    • Write barrier system and method for trapping garbage collection page boundary crossing pointer stores
    • Publication No.: US5845298A
    • Publication date: 1998-12-01
    • Application No.: US841544
    • Filing date: 1997-04-23
    • Inventors: James Michael O'Connor, Marc Tremblay, Sanjay Vishin
    • IPC: G06F12/00; G06F12/02; G06F17/30
    • CPC: G06F12/0276; Y10S707/99945; Y10S707/99957
    • Abstract: Architectural support is provided for trapping of garbage collection page boundary crossing pointer stores. Identification of pointer stores as boundary crossing is performed by a store barrier responsive to a garbage collection page mask that is programmably encoded to define a garbage collection page size. The write barrier and garbage collection page mask provide a programmably-flexible definition of garbage collection page size, and therefore of the boundary crossing pointer stores to be trapped, affording a garbage collector implementer support for a wide variety of generational garbage collection methods, including train algorithm type methods for managing mature portions of a generationally collected memory space. Pointer specific store instruction replacement allows implementations that provide an exact barrier not only to pointer stores, but more particularly to pointer stores crossing programmably defined garbage collection page boundaries.
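
The boundary check described in this abstract reduces to a mask comparison. Below is a minimal C sketch of that idea; it is an illustration only, and the names (crosses_gc_page, gc_page_mask) are mine, not the patent's. It shows how a programmable page mask decides whether the stored-to slot and the stored pointer value lie on different garbage collection pages, and how reprogramming the mask changes which stores are trapped.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical store barrier: the GC page mask is programmable; a larger
 * mask means a smaller GC page and therefore more stores are trapped. */
static uint64_t gc_page_mask = ~((uint64_t)0xFFF);   /* e.g. 4 KiB GC pages */

/* Returns nonzero when the stored-to slot and the stored pointer value lie
 * on different GC pages, i.e. the store crosses a GC page boundary and
 * should be trapped for the collector. */
static int crosses_gc_page(uint64_t slot_addr, uint64_t pointer_value)
{
    return (slot_addr & gc_page_mask) != (pointer_value & gc_page_mask);
}

int main(void)
{
    /* Two addresses on the same 4 KiB GC page, one on a different page. */
    printf("%d\n", crosses_gc_page(0x10000, 0x10F00)); /* 0: same page  */
    printf("%d\n", crosses_gc_page(0x10000, 0x20F00)); /* 1: trap store */

    /* Re-programming the mask to 1 MiB GC pages changes what is trapped. */
    gc_page_mask = ~((uint64_t)0xFFFFF);
    printf("%d\n", crosses_gc_page(0x10000, 0x20F00)); /* 0: same 1 MiB page */
    return 0;
}
```
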
    • 22. Granted Patent
    • Instruction/skid buffers in a multithreading microprocessor that store dispatched instructions to avoid re-fetching flushed instructions
    • Publication No.: US07853777B2
    • Publication date: 2010-12-14
    • Application No.: US11051978
    • Filing date: 2005-02-04
    • Inventors: Darren M. Jones, Ryan C. Kinter, G. Michael Uhler, Sanjay Vishin
    • IPC: G06F9/30
    • CPC: G06F9/3851; G06F9/3802
    • Abstract: An apparatus for reducing instruction re-fetching in a multithreading processor configured to concurrently execute a plurality of threads is disclosed. The apparatus includes a buffer for each thread that stores fetched instructions of the thread, with an indicator for indicating which of the fetched instructions in the buffer have already been dispatched for execution. An input for each thread indicates that one or more of the already-dispatched instructions in the buffer have been flushed from execution. In response to the input, control logic for each thread updates the indicator to indicate that the flushed instructions are no longer already-dispatched. This enables the processor to re-dispatch the flushed instructions from the buffer and avoid re-fetching them. In one embodiment, there are fewer buffers than threads, and they are dynamically allocatable by the threads. In one embodiment, a single integrated buffer is shared by all the threads.
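
The buffer-plus-indicator mechanism can be modelled in a few lines of software. The sketch below is a hypothetical illustration (struct skid_buffer and the fetch/dispatch/flush helpers are invented names): instructions stay in the per-thread buffer after dispatch, and a flush merely rewinds the dispatched indicator so the squashed instructions can be re-dispatched without being re-fetched.

```c
#include <stdio.h>

#define BUF_SIZE 8

/* Hypothetical per-thread instruction/skid buffer: instructions remain
 * buffered after dispatch so they can be re-dispatched after a flush. */
struct skid_buffer {
    unsigned insns[BUF_SIZE]; /* fetched instruction words            */
    int count;                /* number of valid (fetched) entries    */
    int dispatched;           /* entries [0, dispatched) already sent */
};

static void fetch(struct skid_buffer *b, unsigned insn)
{
    if (b->count < BUF_SIZE)
        b->insns[b->count++] = insn;
}

/* Dispatch the next not-yet-dispatched instruction, if any. */
static int dispatch(struct skid_buffer *b, unsigned *insn)
{
    if (b->dispatched == b->count)
        return 0;
    *insn = b->insns[b->dispatched++];
    return 1;
}

/* Flush: n already-dispatched instructions were squashed in the pipeline.
 * Rewinding the dispatch indicator lets them be re-dispatched from the
 * buffer instead of being re-fetched. */
static void flush(struct skid_buffer *b, int n)
{
    b->dispatched = (n > b->dispatched) ? 0 : b->dispatched - n;
}

int main(void)
{
    struct skid_buffer b = { {0}, 0, 0 };
    unsigned insn;

    fetch(&b, 0x100); fetch(&b, 0x104); fetch(&b, 0x108);
    while (dispatch(&b, &insn))
        printf("dispatch %#x\n", insn);

    flush(&b, 2);                 /* last two instructions were flushed  */
    while (dispatch(&b, &insn))   /* re-dispatched without re-fetching   */
        printf("re-dispatch %#x\n", insn);
    return 0;
}
```
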
    • 23. Granted Patent
    • Horizontally-shared cache victims in multiple core processors
    • Publication No.: US07774549B2
    • Publication date: 2010-08-10
    • Application No.: US11681610
    • Filing date: 2007-03-02
    • Inventor: Sanjay Vishin
    • IPC: G06F12/12; G06F13/00
    • CPC: G06F12/0806; G06F12/0811; G06F12/0842; Y02D10/13
    • Abstract: A processor includes multiple processor core units, each including a processor core and a cache memory. Victim lines evicted from a first processor core unit's cache may be stored in another processor core unit's cache, rather than written back to system memory. If the victim line is later requested by the first processor core unit, it is retrieved from the other processor core unit's cache. The processor provides low-latency data transfers between processor core units. The processor transfers victim lines directly between processor core units' caches or uses a victim cache to temporarily hold victim lines while searching for their destinations. The processor evaluates cache priority rules to determine whether victim lines are discarded, written back to system memory, or stored in other processor core units' caches. Cache priority rules can be based on cache coherency data, load balancing schemes, and architectural characteristics of the processor.
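
The placement decision for an evicted line can be pictured as a small policy function. The following C sketch is illustrative only (place_victim and the free_lines load-balancing signal are assumptions of mine, not the patent's rules): a victim is preferentially stored in a peer core's cache, otherwise written back to system memory if dirty, or discarded if clean.

```c
#include <stdio.h>

/* Hypothetical disposition of a victim line evicted from one core's cache. */
enum victim_action { DISCARD, WRITE_BACK, STORE_IN_PEER };

struct core_cache {
    int free_lines;  /* crude load-balancing signal: spare capacity */
};

/* Hypothetical priority rule: prefer keeping the victim on-chip in a peer
 * core's cache; otherwise write it back only if it is dirty. */
static enum victim_action place_victim(int dirty,
                                       const struct core_cache *peers,
                                       int npeers, int *chosen)
{
    for (int i = 0; i < npeers; i++) {
        if (peers[i].free_lines > 0) {
            *chosen = i;
            return STORE_IN_PEER;  /* allows low-latency retrieval later */
        }
    }
    return dirty ? WRITE_BACK : DISCARD;
}

int main(void)
{
    struct core_cache peers[2] = { { .free_lines = 0 }, { .free_lines = 3 } };
    int chosen = -1;
    enum victim_action a = place_victim(1, peers, 2, &chosen);

    if (a == STORE_IN_PEER)
        printf("victim stored in peer core %d's cache\n", chosen);
    else
        printf("victim %s\n", a == WRITE_BACK ? "written back" : "discarded");
    return 0;
}
```
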
    • 24. Granted Patent
    • Leaky-bucket thread scheduler in a multithreading microprocessor
    • Publication No.: US07752627B2
    • Publication date: 2010-07-06
    • Application No.: US11051980
    • Filing date: 2005-02-04
    • Inventors: Darren M. Jones, Ryan C. Kinter, Thomas A. Petersen, Sanjay Vishin
    • IPC: G06F9/46; G06F9/30
    • CPC: G06F9/3851; G06F9/3814; G06F9/3861; G06F9/4881; Y02D10/24
    • Abstract: A leaky-bucket style thread scheduler for scheduling concurrent execution of multiple threads in a microprocessor is provided. The execution pipeline notifies the scheduler when it has completed instructions. The scheduler maintains a virtual water level for each thread and decreases it each time the execution pipeline executes an instruction of the thread. The scheduler also maintains a requested instruction execution rate for each thread and increases the virtual water level based on that rate per a predetermined number of clock cycles. The scheduler includes virtual water pressure parameters that define a set of virtual water pressure ranges over the height of the virtual water bucket. When a thread's virtual water level moves from one virtual water pressure range to the next higher range, the scheduler increases the instruction issue priority for the thread; conversely, when the level moves down, the scheduler decreases the instruction issue priority for the thread.
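
The leaky-bucket arithmetic lends itself to a short model. In the hypothetical sketch below (the field names and the three-threshold layout are mine), each committed instruction drains one unit from the thread's virtual water level, each refill period adds the requested rate, and the issue priority follows the virtual water pressure range the level currently sits in.

```c
#include <stdio.h>

/* Hypothetical leaky-bucket state for one thread. */
struct thread_bucket {
    int level;          /* virtual water level                       */
    int rate;           /* instructions requested per refill period  */
    int thresholds[3];  /* virtual water pressure range boundaries   */
};

/* Pipeline completed one instruction of this thread: level drains by one. */
static void on_commit(struct thread_bucket *t)
{
    if (t->level > 0)
        t->level--;
}

/* Every predetermined number of clock cycles the level rises by the
 * thread's requested execution rate. */
static void on_refill_period(struct thread_bucket *t)
{
    t->level += t->rate;
}

/* Issue priority grows as the level climbs into higher pressure ranges. */
static int priority(const struct thread_bucket *t)
{
    int p = 0;
    for (int i = 0; i < 3; i++)
        if (t->level > t->thresholds[i])
            p = i + 1;
    return p;
}

int main(void)
{
    struct thread_bucket t = { .level = 0, .rate = 4, .thresholds = {2, 5, 8} };

    on_refill_period(&t);                     /* level = 4                 */
    printf("priority %d\n", priority(&t));    /* range 1 -> priority 1     */
    on_refill_period(&t);                     /* level = 8                 */
    printf("priority %d\n", priority(&t));    /* range 2 -> priority 2     */
    on_commit(&t); on_commit(&t); on_commit(&t); on_commit(&t);
    printf("priority %d\n", priority(&t));    /* level 4 -> priority 1     */
    return 0;
}
```
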
    • 25. Patent Application
    • Method and Apparatus for Global Ordering to Insure Latency Independent Coherence
    • Publication No.: US20100005247A1
    • Publication date: 2010-01-07
    • Application No.: US12557421
    • Filing date: 2009-09-10
    • Inventors: Thomas A. Petersen, Sanjay Vishin
    • IPC: G06F12/08
    • CPC: G06F12/0831
    • Abstract: A method and apparatus is described for insuring coherency between memories in a multi-agent system where the agents are interconnected by one or more fabrics. A global arbiter is used to segment coherency into three phases (request, snoop, and response) and to apply global ordering to the requests. A bus interface having request, snoop, and response logic is provided for each agent. A bus interface having request, snoop, and response logic is provided for the global arbiter, and a bus interface is provided to couple the global arbiter to each type of fabric it is responsible for. Global ordering and arbitration logic tags incoming requests from the multiple agents and insures that snoops are responded to according to the global order, without regard to latency differences in the fabrics.
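
A rough software picture of the globally ordered, three-phase protocol: in the sketch below (all types and names are invented for illustration), the arbiter stamps each incoming request with a global tag, and snoop responses that complete early on a fast fabric are held back until all older tags have responded, so fabric latency differences cannot reorder the response phase.

```c
#include <stdio.h>

#define MAX_REQ 8

/* Hypothetical globally ordered request: the arbiter stamps each incoming
 * request with a global tag in arrival order. */
struct coh_request {
    int agent;        /* requesting agent                       */
    int tag;          /* global order assigned by the arbiter   */
    int snoop_done;   /* snoop phase completed for this request */
};

static struct coh_request pending[MAX_REQ];
static int head, tail, next_tag;

static int issue_request(int agent)
{
    struct coh_request *r = &pending[tail++ % MAX_REQ];
    r->agent = agent;
    r->tag = next_tag++;     /* global ordering, independent of the fabric */
    r->snoop_done = 0;
    return r->tag;
}

/* Snoop completions may arrive in any order (fabrics have different
 * latencies), but the response phase drains strictly in tag order. */
static void snoop_complete(int tag)
{
    for (int i = head; i != tail; i++)
        if (pending[i % MAX_REQ].tag == tag)
            pending[i % MAX_REQ].snoop_done = 1;

    while (head != tail && pending[head % MAX_REQ].snoop_done) {
        struct coh_request *r = &pending[head++ % MAX_REQ];
        printf("respond to agent %d (tag %d)\n", r->agent, r->tag);
    }
}

int main(void)
{
    int t0 = issue_request(0);
    int t1 = issue_request(1);

    snoop_complete(t1);  /* finished first on a fast fabric: held back  */
    snoop_complete(t0);  /* now both responses drain in global order    */
    return 0;
}
```
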
    • 26. Granted Patent
    • Smart memory based synchronization controller for a multi-threaded multiprocessor SoC
    • Publication No.: US07594089B2
    • Publication date: 2009-09-22
    • Application No.: US10955231
    • Filing date: 2004-09-30
    • Inventors: Sanjay Vishin, Kevin D. Kissell, Darren M. Jones, Ryan C. Kinter
    • IPC: G06F12/00; G06F13/00; G06F13/28
    • CPC: G06F12/1416; G06F13/1642
    • Abstract: A memory interface for use with a multiprocess memory system having a gating memory, the gating memory associating one or more memory access methods with each of a plurality of memory locations of the memory system, wherein the gating memory returns a particular access method for a particular memory location responsive to a memory access instruction relating to that location. The interface includes: a request storage for storing a plurality of concurrent memory access instructions for one or more of the particular memory locations, each memory access instruction issued from an associated independent thread context; an arbiter, coupled to the request storage, for selecting a particular one of the memory access instructions to apply to the gating memory; and a controller, coupled to the request storage and to the arbiter, for storing the plurality of memory access instructions in the request storage, initiating application of the memory access instruction selected by the arbiter to the gating memory, receiving from the gating memory the access method associated with that instruction, and initiating a communication of that access method to the thread context associated with the instruction.
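
The claim-style abstract is dense, so a toy software model may help. Everything in the sketch below is hypothetical (the access-method names, the gating array, and controller_step are mine): a gating memory maps each guarded location to an access method, a request store holds concurrent accesses tagged with their thread contexts, and the controller applies each selected request to the gating memory and reports the returned access method back to the issuing thread context.

```c
#include <stdio.h>

#define LOCATIONS 4
#define MAX_PENDING 8

/* Hypothetical access methods a gating memory may associate with a location. */
enum access_method { GRANT, BLOCK_UNTIL_FULL, BLOCK_UNTIL_EMPTY };

/* Gating memory: one access method per guarded memory location. */
static enum access_method gating[LOCATIONS] = {
    GRANT, BLOCK_UNTIL_FULL, BLOCK_UNTIL_EMPTY, GRANT
};

/* Request storage: concurrent memory access instructions, each tagged with
 * the independent thread context that issued it. */
struct access_request { int thread_ctx; int location; };
static struct access_request pending[MAX_PENDING];
static int npending;

static void enqueue(int thread_ctx, int location)
{
    pending[npending].thread_ctx = thread_ctx;
    pending[npending].location = location;
    npending++;
}

/* Controller step: requests are taken in arrival order (a trivial arbiter
 * for this sketch), applied to the gating memory, and the resulting access
 * method is communicated back to the issuing thread context. */
static void controller_step(void)
{
    static const char *name[] = { "GRANT", "BLOCK_UNTIL_FULL", "BLOCK_UNTIL_EMPTY" };
    for (int i = 0; i < npending; i++) {
        struct access_request *r = &pending[i];
        enum access_method m = gating[r->location];
        printf("thread %d, location %d -> %s\n", r->thread_ctx, r->location, name[m]);
    }
    npending = 0;
}

int main(void)
{
    enqueue(0, 1);   /* thread 0 touches a location gated until full  */
    enqueue(1, 0);   /* thread 1 touches an ungated location          */
    controller_step();
    return 0;
}
```
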
    • 27. Patent Application
    • Avoiding Livelock Using A Cache Manager in Multiple Core Processors
    • Publication No.: US20080320230A1
    • Publication date: 2008-12-25
    • Application No.: US11767247
    • Filing date: 2007-06-22
    • Inventors: Sanjay Vishin, Ryan C. Kinter
    • IPC: G06F12/08
    • CPC: G06F12/0817; Y02D10/13
    • Abstract: Livelocks are prevented in multiple core processors by verifying that a data access request is still valid before sending messages to processor cores that may cause other data access requests to fail. A cache coherency manager receives data access requests from multiple processor cores. Upon receiving a data access request that may cause a livelock, the cache coherency manager first sends an intervention message back to the requesting processor core to confirm that this data access request will succeed. If the requesting processor core determines that the data access request is still valid, it directs the cache coherency manager to proceed with the data access request. The cache coherency manager may then send intervention messages to other processor cores to complete the data access request. If the requesting processor core determines that the data access request is invalid, it directs the cache coherency manager to abandon the data access request.
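
The step that breaks the livelock is the self-intervention sent back to the requester. The C below is a hypothetical sketch (names are illustrative): the coherency manager asks the requesting core to confirm its request is still valid, and only a confirmed request triggers interventions in the other cores' caches; a stale request is abandoned before it can disturb anyone else.

```c
#include <stdio.h>

struct core {
    int id;
    int request_valid;  /* the core may have since satisfied or dropped it */
};

/* Hypothetical self-intervention: ask the requesting core whether its data
 * access request is still valid before disturbing any other core. */
static int confirm_with_requester(const struct core *c)
{
    return c->request_valid;
}

static void intervene_other_cores(const struct core *cores, int ncores, int requester)
{
    for (int i = 0; i < ncores; i++)
        if (cores[i].id != requester)
            printf("intervention sent to core %d\n", cores[i].id);
}

/* Coherency manager handling of one data access request. */
static void handle_request(const struct core *cores, int ncores, int requester)
{
    if (!confirm_with_requester(&cores[requester])) {
        printf("request from core %d abandoned (no longer valid)\n", requester);
        return;  /* nothing sent that could invalidate other cores' requests */
    }
    intervene_other_cores(cores, ncores, requester);
}

int main(void)
{
    struct core cores[3] = { {0, 1}, {1, 0}, {2, 1} };

    handle_request(cores, 3, 0);  /* still valid: other cores are intervened */
    handle_request(cores, 3, 1);  /* stale: abandoned, avoiding the livelock */
    return 0;
}
```
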
    • 28. Patent Application
    • Smart memory based synchronization controller for a multi-threaded multiprocessor SoC
    • Publication No.: US20050251639A1
    • Publication date: 2005-11-10
    • Application No.: US10955231
    • Filing date: 2004-09-30
    • Inventors: Sanjay Vishin, Kevin Kissell, Darren Jones, Ryan Kinter
    • IPC: G06F12/00; G06F12/14; G06F13/16
    • CPC: G06F12/1416; G06F13/1642
    • Abstract: A memory interface for use with a multiprocess memory system having a gating memory, the gating memory associating one or more memory access methods with each of a plurality of memory locations of the memory system, wherein the gating memory returns a particular access method for a particular memory location responsive to a memory access instruction relating to that location. The interface includes: a request storage for storing a plurality of concurrent memory access instructions for one or more of the particular memory locations, each memory access instruction issued from an associated independent thread context; an arbiter, coupled to the request storage, for selecting a particular one of the memory access instructions to apply to the gating memory; and a controller, coupled to the request storage and to the arbiter, for storing the plurality of memory access instructions in the request storage, initiating application of the memory access instruction selected by the arbiter to the gating memory, receiving from the gating memory the access method associated with that instruction, and initiating a communication of that access method to the thread context associated with the instruction.
    • 29. Granted Patent
    • Generation isolation system and method for garbage collection
    • Publication No.: US6098089A
    • Publication date: 2000-08-01
    • Application No.: US841543
    • Filing date: 1997-04-23
    • Inventors: James Michael O'Connor, Marc Tremblay, Sanjay Vishin
    • IPC: G06F12/00; G06F12/02; G06F17/30
    • CPC: G06F12/0276; Y10S707/99953; Y10S707/99957
    • Abstract: Architectural support for generation isolation is provided through trapping of intergenerational pointer stores. Identification of pointer stores as intergenerational is performed by a store barrier responsive to an intergenerational pointer store trap matrix that is programmably encoded with the store target object and store pointer data generation pairs to be trapped. The write barrier and intergenerational pointer store trap matrix provide a programmably-flexible definition of the generation pairs to be trapped, affording a garbage collector implementer support for a wide variety of generational garbage collection methods, including remembered set-based methods, card-marking type methods, write barrier based copying collector methods, and the like, as well as combinations thereof, and combinations including train algorithm type methods for managing mature portions of a generationally collected memory space. Pointer specific store instruction replacement allows implementations in accordance with this invention to provide an exact barrier not only to pointer stores, but to the specific intergenerational pointer stores of interest to a particular garbage collection method or combination of methods.
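
The trap matrix is essentially a small programmable lookup keyed by the target object's generation and the stored pointer's generation. The sketch below is illustrative only; the three-generation layout and the initial encoding (trapping younger-into-older stores, as a remembered-set collector would want) are my assumptions, not the patent's encoding.

```c
#include <stdio.h>

#define GENERATIONS 3   /* e.g. young, mature, old */

/* Hypothetical intergenerational pointer store trap matrix: entry [t][p] is
 * nonzero if a store of a generation-p pointer into a generation-t object
 * must be trapped for the collector. This encoding traps stores of younger
 * pointers into older objects, as a remembered-set collector requires. */
static int trap_matrix[GENERATIONS][GENERATIONS] = {
    /* pointer gen:  0  1  2   (rows are the target object's generation) */
    /* target 0 */ { 0, 0, 0 },
    /* target 1 */ { 1, 0, 0 },
    /* target 2 */ { 1, 1, 0 },
};

static int store_barrier(int target_gen, int pointer_gen)
{
    return trap_matrix[target_gen][pointer_gen];
}

int main(void)
{
    /* Storing a young (gen 0) pointer into an old (gen 2) object traps;
     * a store within the young generation does not. */
    printf("%d\n", store_barrier(2, 0));  /* 1: trapped   */
    printf("%d\n", store_barrier(0, 0));  /* 0: untrapped */

    /* Reprogramming the matrix supports a different collection scheme, e.g.
     * a train algorithm collector might also trap stores within the mature
     * generation to track pointers between cars. */
    trap_matrix[1][1] = 1;
    printf("%d\n", store_barrier(1, 1));  /* 1: now trapped */
    return 0;
}
```
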
    • 30. Patent Application
    • SYSTEM AND METHOD FOR DYNAMIC VOLTAGE SCALING IN A GPS RECEIVER
    • Publication No.: US20100283680A1
    • Publication date: 2010-11-11
    • Application No.: US12435591
    • Filing date: 2009-05-05
    • Inventors: Sanjay Vishin, Steve Gronemeyer
    • IPC: G01S19/24; G01S19/37
    • CPC: G01S19/34; G01S19/24
    • Abstract: Systems and methods are disclosed herein to dynamically vary supply voltages and clock frequencies, also known as dynamic voltage scaling (DVS), in GPS receivers to minimize receiver power consumption while meeting performance requirements. For the baseband circuitry performing satellite acquisition and tracking, supply voltages and clock frequencies are dynamically adjusted as a function of signal processing requirements and operating conditions to reduce baseband power consumption. Similarly, the supply voltage and clock frequency of the processor running navigation software and event processing are dynamically adjusted as a function of navigation performance requirements and event occurrences to reduce processor power consumption.
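
The adjustment policy can be sketched as selecting a voltage/frequency operating point from the current workload requirement. The operating points, names, and numbers in the C below are invented for illustration and do not come from the patent; they simply show the "lowest point that still meets the requirement" idea.

```c
#include <stdio.h>

/* Hypothetical DVS operating points for a GPS baseband or processor block. */
struct op_point { int freq_mhz; int mv; };

static const struct op_point points[] = {
    {  25,  800 },   /* tracking a few strong satellites     */
    {  50,  900 },   /* routine tracking / navigation        */
    { 100, 1100 },   /* cold-start acquisition, many events  */
};

/* Pick the lowest operating point whose frequency still meets the current
 * signal-processing or navigation requirement, minimizing power while
 * meeting performance. */
static struct op_point select_point(int required_mhz)
{
    for (unsigned i = 0; i < sizeof points / sizeof points[0]; i++)
        if (points[i].freq_mhz >= required_mhz)
            return points[i];
    return points[sizeof points / sizeof points[0] - 1];
}

int main(void)
{
    int demand[] = { 90, 40, 20 };  /* acquisition, then tracking, then idle */
    for (int i = 0; i < 3; i++) {
        struct op_point p = select_point(demand[i]);
        printf("need %3d MHz -> run at %3d MHz, %4d mV\n",
               demand[i], p.freq_mhz, p.mv);
    }
    return 0;
}
```
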