    • 3. Granted Patent
    • System and method for predicting cache performance
    • Publication: US06952664B1 (2005-10-04)
    • Application: US09834342 (filed 2001-04-13)
    • Inventors: Tirthankar Lahiri, Juan R. Loaiza, Arvind Nithrakashyap, William H. Bridge
    • IPC: G06F17/50; G06G17/50
    • CPC: G06F17/5022
    • A system and methods for simulating the performance (e.g., miss rate) of one or more caches. A cache simulator comprises a segmented list of buffers, with each buffer configured to store a data identifier and an identifier of the buffer's segment. Data references, which may be copied from an operational cache, are applied to the list to conduct the simulation. Initial estimates of each cache's miss rate include the number of references that missed all segments of the list plus the hits in all segments not part of the cache. A correction factor is generated from the ratio of actual misses incurred by the operational cache to the estimated misses for a simulated cache of the same size as the operational cache. Final predictions are generated by multiplying the initial estimates by the correction factor. The size of the operational cache may be dynamically adjusted based on the final predictions.
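The abstract describes a single-pass simulation: a segmented LRU list records, for each data reference, which segment it hit (or that it missed every segment), so miss estimates for several candidate cache sizes fall out of one trace. A minimal sketch of that idea, with illustrative class names, segment sizes, and workload that are not taken from the patent:

```python
from collections import OrderedDict

class SegmentedCacheSimulator:
    """Toy sketch of a segmented-list cache simulator: each segment is an
    LRU run, and a cache of N segments is evaluated without simulating
    each size separately."""

    def __init__(self, num_segments, segment_size):
        self.segments = [OrderedDict() for _ in range(num_segments)]
        self.segment_size = segment_size
        self.hits = [0] * num_segments  # hit count per segment
        self.total_misses = 0           # references that missed every segment

    def reference(self, key):
        for i, seg in enumerate(self.segments):
            if key in seg:
                self.hits[i] += 1
                del seg[key]
                self._insert_front(key)  # promote to the head of the list
                return
        self.total_misses += 1
        self._insert_front(key)

    def _insert_front(self, key):
        # Insert at the head segment; overflow cascades down the list,
        # evicting each segment's oldest entry into the next segment.
        for seg in self.segments:
            seg[key] = True
            if len(seg) <= self.segment_size:
                return
            key, _ = seg.popitem(last=False)
        # Fell off the last segment: the identifier is forgotten entirely.

    def estimated_misses(self, cache_segments):
        # Per the abstract: misses for a cache spanning the first
        # `cache_segments` segments are the all-segment misses plus the
        # hits that landed in segments the cache would not contain.
        return self.total_misses + sum(self.hits[cache_segments:])

sim = SegmentedCacheSimulator(num_segments=4, segment_size=2)
for k in [1, 2, 3, 1, 2, 4, 5, 1]:   # a hypothetical reference trace
    sim.reference(k)

# Suppose the operational cache spans 2 segments and actually missed
# 5 times; scale every estimate by the observed/estimated ratio.
actual_misses = 5
factor = actual_misses / sim.estimated_misses(2)
predicted = {n: factor * sim.estimated_misses(n) for n in range(1, 5)}
```

The correction factor anchors the simulation to the real cache's behavior: the estimate for the operational cache's own size is forced to match its measured miss count, and other sizes are scaled by the same ratio.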
    • 6. Granted Patent
    • Method and mechanism for efficient implementation of ordered records
    • Publication: US07039773B2 (2006-05-02)
    • Application: US10426471 (filed 2003-04-29)
    • Inventors: Wei Ming Hu, Juan R. Loaiza, Roger J. Bamford, Vikram Joshi, Arvind Nithrakashyap, Tudor Bosman, Vinay Srihari, Alok Pareek
    • IPC: G06F12/00
    • CPC: G06F17/30595; Y10S707/99953
    • An improved method, mechanism, and system for implementing, generating, and maintaining records, such as redo records and redo logs in a database system, are disclosed. Multiple sets of records may be created and combined into a partially ordered (or non-ordered) group of records, which are later collectively ordered or sorted as needed to create a fully ordered set of records. With respect to a database system, the redo generation bottleneck is minimized by providing multiple in-memory redo buffers that are available to hold redo records generated by multiple threads of execution. When the in-memory redo buffers are written to a persistent storage medium, no specific ordering needs to be specified with respect to the redo records from the different in-memory redo buffers. While the collective group of records may not be ordered, the written-out redo records may be partially ordered based upon the ordered redo records from within individual in-memory redo buffers. At recovery, ordering and/or merging of redo records may occur to satisfy database consistency requirements.
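The scheme above can be sketched as per-buffer ordered runs that are written out in no particular global order and merged only at recovery time. The names, the shared sequence counter, and the record contents below are hypothetical stand-ins, not the patented implementation:

```python
import heapq
import itertools
import threading

class RedoStrand:
    """One in-memory redo buffer. Records *within* a strand are appended
    in increasing sequence order, so each strand is an ordered run even
    though no ordering is enforced across strands at write time."""

    def __init__(self, seq):
        self._seq = seq              # shared monotone counter (stand-in for an SCN source)
        self.records = []
        self._lock = threading.Lock()

    def append(self, change):
        with self._lock:
            self.records.append((next(self._seq), change))

def recover(strands):
    """At recovery, merge the per-strand ordered runs into one fully
    ordered stream; heapq.merge exploits the partial ordering."""
    return [change for _, change in heapq.merge(*(s.records for s in strands))]

seq = itertools.count()
a, b = RedoStrand(seq), RedoStrand(seq)
a.append("insert row 1")
b.append("update row 7")
a.append("commit txn 3")
log = recover([a, b])   # fully ordered across both strands
```

The point of the design is that the expensive global ordering is deferred: writers never contend on a single redo buffer, and the total order is reconstructed lazily, only when recovery actually needs it.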
    • 7. Patent Application
    • Maintaining item-to-node mapping information in a distributed system
    • Publication: US20080177741A1 (2008-07-24)
    • Application: US11657778 (filed 2007-01-24)
    • Inventors: Vikram Joshi, Alexander Tsukerman, Arvind Nithrakashyap, Jia Shi, Tudor Bosman
    • IPC: G06F17/30
    • CPC: G06F17/30557
    • A method and apparatus for maintaining an item-to-node mapping among nodes in a distributed cluster is provided. Each node maintains locally-stored system-state information indicating that node's understanding of which master nodes are alive and dead. Instead of employing a global item-to-node mapping, each node acts upon a locally determined mapping based on its locally-stored system-state information. For any two nodes with the same locally-stored system-state information, the locally determined mapping is the same. A node updates its locally-stored system-state information upon detecting a node failure or receiving a message from another node indicating different locally-stored system-state information. The new locally-stored system-state information is transmitted on a need-to-know basis, and consequently nodes with different item-to-node mappings may operate concurrently. Mechanisms to avoid nodes assuming conflicting ownership of items are employed, thus allowing node failures to propagate via asynchronous messaging instead of requiring a cluster-wide synchronization event.
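One standard way to obtain the key property in this abstract, that any two nodes holding the same view of which nodes are alive derive the same item-to-node mapping with no global table, is rendezvous (highest-random-weight) hashing. The application does not specify this particular function, so treat it as an illustrative stand-in:

```python
import hashlib

def _score(item, node):
    # Deterministic pseudo-random weight for an (item, node) pair.
    return hashlib.sha256(f"{item}|{node}".encode()).hexdigest()

def owner(item, live_nodes):
    """Rendezvous hashing: the owner of an item is the live node with the
    highest score. Any node with the same set of live_nodes (its
    locally-stored system-state) computes the same owner locally."""
    return max(live_nodes, key=lambda n: _score(item, n))

# Two nodes with identical views agree without any coordination:
view_a = {"n1", "n2", "n3"}
view_b = {"n1", "n2", "n3"}
assert owner("item42", view_a) == owner("item42", view_b)

# When a failure propagates, only items owned by the dead node remap,
# so nodes with slightly stale views mostly still agree:
dead = owner("item42", view_a)
view_after_failure = view_a - {dead}
```

This also illustrates why the abstract's asynchronous failure propagation is workable: a stale view only produces a wrong answer for items owned by the failed node, which is exactly what the conflicting-ownership safeguards have to cover.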
    • 9. Granted Patent
    • System load based adaptive prefetch
    • Publication: US07359890B1 (2008-04-15)
    • Application: US10142257 (filed 2002-05-08)
    • Inventors: Chi Ku, Arvind Nithrakashyap, Ari W. Mozes
    • IPC: G06F17/30
    • CPC: G06F17/3048; Y10S707/99932
    • The number of blocks of data to be prefetched into a buffer cache is determined dynamically at run time (e.g., during execution of a query), based at least in part on the load placed on the buffer cache. An application program (such as a database) is responsive to the number (also called the "prefetch size") to determine the amount of prefetching. A sequence of instructions (also called a "prefetch size daemon") computes the prefetch size based on, for example, the number of prefetched blocks aged out before use. The prefetch size daemon dynamically revises the prefetch size based on usage of the buffer cache, thereby forming a feedback loop. Depending on the embodiment, at times of excessive use of the buffer cache, prefetching may even be turned off. Although in one embodiment described herein the prefetch size daemon is implemented in a database, in other embodiments other kinds of applications and/or the operating system itself can use a prefetch size daemon of the type described herein to dynamically determine and change prefetch behavior.
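The feedback loop the abstract describes (shrink the prefetch size when prefetched blocks age out of the buffer cache before use, grow it when they are consumed) might look like the rule below. The thresholds, step sizes, and bounds are invented for illustration and are not the patented policy:

```python
def adjust_prefetch_size(current, aged_out, prefetched,
                         min_size=0, max_size=64):
    """Toy prefetch-size feedback rule (hypothetical numbers):
    - `aged_out`: prefetched blocks evicted before any use this interval
    - `prefetched`: total blocks prefetched this interval
    A high waste ratio signals buffer-cache pressure, so back off
    (possibly all the way to 0, i.e. prefetching turned off); a low
    waste ratio signals headroom, so ramp up cautiously."""
    if prefetched == 0:
        return current
    waste = aged_out / prefetched
    if waste > 0.25:                       # heavy pressure: halve the size
        return max(min_size, current // 2)
    if waste < 0.05:                       # prefetches being consumed: grow
        return min(max_size, current + 4)
    return current                         # middle band: hold steady

# A daemon would run this periodically, closing the feedback loop:
size = 16
size = adjust_prefetch_size(size, aged_out=8, prefetched=16)  # waste 0.5: shrink
size = adjust_prefetch_size(size, aged_out=0, prefetched=8)   # waste 0.0: grow
```

Note the asymmetry (multiplicative decrease, additive increase): backing off quickly under pressure while growing slowly is a common stabilizing choice for this kind of control loop, though the patent itself does not mandate it.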