    • 21. Granted patent
    • Producer-consumer data transfer using piecewise circular queue
    • US08806168B2
    • 2014-08-12
    • US13230833
    • 2011-09-12
    • Igor Ostrovsky, Stephen H. Toub
    • G06F12/02
    • G06F12/02; G06F9/544; G06F2209/548
    • A method includes producing values with a producer thread, and providing a queue data structure including a first array of storage locations for storing the values. The first array has a first tail pointer and a first linking pointer. If a number of values stored in the first array is less than a capacity of the first array, an enqueue operation writes a new value at a storage location pointed to by the first tail pointer and advances the first tail pointer. If the number of values stored in the first array is equal to the capacity of the first array, a second array of storage locations is allocated in the queue. The second array has a second tail pointer. The first array is linked to the second array with the first linking pointer. An enqueue operation writes the new value at a storage location pointed to by the second tail pointer and advances the second tail pointer.
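The enqueue/dequeue flow in the abstract can be sketched as below. This is an illustrative single-threaded sketch, not the patented implementation: the initial capacity, the size-doubling policy for the second array, and the simplified (non-circular) dequeue path are all assumptions made for brevity.

```python
class Segment:
    """A fixed-capacity array of storage locations plus a linking pointer."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.tail = 0        # the "tail pointer": next free index
        self.next = None     # the "linking pointer" to the next array

class PiecewiseQueue:
    """Values go into the first array until it fills; then a second
    array is allocated, linked in, and new values go there."""
    def __init__(self, initial_capacity=4):
        self.head_seg = self.tail_seg = Segment(initial_capacity)
        self.head = 0        # dequeue index within head_seg

    def enqueue(self, value):
        seg = self.tail_seg
        if seg.tail == len(seg.slots):            # first array is full:
            new_seg = Segment(2 * len(seg.slots)) # allocate a second array
            seg.next = new_seg                    # link via linking pointer
            self.tail_seg = seg = new_seg
        seg.slots[seg.tail] = value               # write at the tail pointer
        seg.tail += 1                             # advance the tail pointer

    def dequeue(self):
        seg = self.head_seg
        if self.head == len(seg.slots) and seg.next is not None:
            self.head_seg = seg = seg.next        # follow the link
            self.head = 0
        if seg is self.tail_seg and self.head == seg.tail:
            raise IndexError("queue is empty")
        value = seg.slots[self.head]
        seg.slots[self.head] = None               # let the value be collected
        self.head += 1
        return value
```

Because the producer only touches tail pointers and the consumer only touches head pointers, the two sides rarely contend, which is the point of the design.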
    • 22. Granted patent
    • Dynamic partitioning of data by occasionally doubling data chunk size for data-parallel applications
    • US08707320B2
    • 2014-04-22
    • US12712986
    • 2010-02-25
    • Michael Liddell, Igor Ostrovsky, Stephen Toub
    • G06F9/46
    • G06F9/505
    • Dynamic data partitioning is disclosed for use with a multiple node processing system that consumes items from a data stream of any length and independent of whether the length is undeclared. Dynamic data partitioning takes items from the data stream when a thread is idle and assigns the taken items to an idle thread, and it varies the size of data chunks taken from the stream and assigned to a thread to efficiently distribute work loads among the nodes. In one example, data chunk sizes taken from the beginning of the data stream are relatively smaller than data chunk sizes taken towards the middle or end of the data stream. Dynamic data partitioning employs a growth function where chunks have a size related to single aligned cache lines and efficiently increases the size of the data chunks to occasionally double the amount of data assigned to concurrent threads.
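The growth policy in the abstract (small chunks early, occasional doubling, stream length possibly undeclared) can be sketched as follows. The doubling period and the size cap are illustrative constants, not values from the patent.

```python
import itertools

def chunk_schedule(doubling_period=3, max_size=512):
    """Yield chunk sizes: repeat each size `doubling_period` times,
    then double it, capped at max_size. Constants are assumptions."""
    size = 1
    while True:
        for _ in range(doubling_period):
            yield size
        size = min(2 * size, max_size)

def take_chunks(stream, schedule):
    """Consume a stream of unknown (undeclared) length, handing out
    chunks whose sizes follow the growth schedule. In the patented
    scheme an idle thread would take the next chunk; here we just
    yield the chunks in order."""
    it = iter(stream)
    for size in schedule:
        chunk = list(itertools.islice(it, size))
        if not chunk:     # stream exhausted, possibly mid-schedule
            return
        yield chunk
```

Early chunks are small so work spreads across threads quickly at startup; later chunks are large so per-chunk synchronization overhead stays low toward the middle and end of the stream.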
    • 23. Granted patent
    • Propagating unobserved exceptions in distributed execution environments
    • US08631279B2
    • 2014-01-14
    • US13155263
    • 2011-06-07
    • Huseyin Serkan Yildiz, Massimo Mascaro, Joseph E. Hoag, Igor Ostrovsky
    • G06F11/07
    • G06F11/0784
    • The present invention extends to methods, systems, and computer program products for propagating unhandled exceptions in distributed execution environments, such as clusters. A job (e.g., a query) can include a series of computation steps that are executed on multiple compute nodes each processing parts of a distributed data set. Unhandled exceptions can be caught while computations are running on data partitions of different compute nodes. Unhandled exception objects can be stored in a serialized format in a compute node's local storage (or an alternate central location) along with auxiliary details such as the data partition being processed at the time. Stored serialized exception objects for a job can be harvested and aggregated in a single container object. The single container object can be passed back to the client.
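The catch-serialize-harvest flow might look like the sketch below, with Python stand-ins for the pieces the abstract names: `pickle` for the serialized format, a plain list per compute node for local storage, and a container class in the role that an aggregate-exception type plays on a single machine. All names here are hypothetical.

```python
import pickle

class JobExceptionContainer(Exception):
    """Single container object aggregating exceptions harvested from
    every compute node, passed back to the client."""
    def __init__(self, entries):
        self.entries = entries   # list of (partition_id, exception)
        super().__init__(f"{len(entries)} partition(s) failed")

def run_partition(node_store, partition_id, func, data):
    """Run `func` over one data partition; on an unhandled exception,
    store the serialized exception object in the node's local storage
    along with auxiliary details (here, the partition being processed)."""
    try:
        return [func(x) for x in data]
    except Exception as exc:
        node_store.append(pickle.dumps((partition_id, exc)))
        return None

def harvest(node_stores):
    """Gather stored serialized exception objects from each node and
    aggregate them into one container for the client."""
    entries = [pickle.loads(blob) for store in node_stores for blob in store]
    if entries:
        raise JobExceptionContainer(entries)
```

Serializing the exception at the point of failure matters because the exception object otherwise dies with the worker process; the client can then inspect every failure, not just the first.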
    • 26. Patent application
    • PARALLEL QUERY ENGINE WITH DYNAMIC NUMBER OF WORKERS
    • US20110185358A1
    • 2011-07-28
    • US12695049
    • 2010-01-27
    • Igor Ostrovsky, John J. Duffy, Stephen Harris Toub
    • G06F9/46
    • G06F9/4881; G06F2209/483
    • Partitioning query execution work of a sequence including a plurality of elements. A method includes a worker core requesting work from a work queue. In response, the worker core receives a task from the work queue. The task is a replicable sequence-processing task including two distinct steps: scheduling a copy of the task on the scheduler queue and processing a sequence. The worker core processes the task by: creating a replica of the task and placing the replica of the task on the work queue, and beginning processing the sequence. The acts are repeated for one or more additional worker cores, where the task each additional worker core receives from the work queue is a replica placed there earlier by a different worker core performing the replication step.
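The two-step replicable task can be sketched with ordinary threads: whichever worker dequeues the task first re-enqueues a replica (so the next idle worker can join), then drains items from the shared sequence. The queue seeding, locking, and worker count below are illustrative assumptions, not the patented scheduler.

```python
import queue
import threading

def parallel_process(seq, func, num_workers=4):
    work_q = queue.Queue()
    source = iter(seq)          # the shared sequence being processed
    lock = threading.Lock()
    results = []

    def task():
        # Step 1: schedule a copy of the task so the next idle
        # worker that asks the queue for work can join in.
        work_q.put(task)
        # Step 2: process the shared sequence until it is drained.
        while True:
            with lock:
                try:
                    item = next(source)
                except StopIteration:
                    return
            r = func(item)
            with lock:
                results.append(r)

    work_q.put(task)            # seed the queue with one replicable task
    threads = []
    for _ in range(num_workers):
        t = work_q.get()        # an idle worker requests work...
        th = threading.Thread(target=t)
        th.start()              # ...and runs the replica it received
        threads.append(th)
    for th in threads:
        th.join()
    return results
```

The self-replication is what makes the worker count dynamic: the query never decides up front how many workers it needs, because any core that becomes idle can pull a replica and start contributing.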
    • 27. Patent application
    • GROUPING MECHANISM FOR MULTIPLE PROCESSOR CORE EXECUTION
    • US20110125805A1
    • 2011-05-26
    • US12625379
    • 2009-11-24
    • Igor Ostrovsky
    • G06F17/30
    • G06F9/5038; G06F9/5066
    • A concurrent grouping operation for execution on a multiple core processor is provided. The grouping operation is provided with a sequence or set of elements. In one phase, each worker receives a partition of a sequence of elements to be grouped. The elements of each partition are arranged into a data structure, which includes one or more keys where each key corresponds to a value list of one or more of the received elements associated with that key. In another phase, the data structures created by each worker are merged so that the keys and corresponding elements for the entire sequence of elements exist in one data structure. Recursive merging can be completed in a constant time, which is not proportional to the length of the sequence.
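The two phases can be sketched sequentially as below; in the described scheme the phase-1 calls run on separate workers and the phase-2 pairwise merges run concurrently, with `partitions` standing in for the per-worker partitions of the input sequence.

```python
from collections import defaultdict

def local_group(partition, key_fn):
    """Phase 1: each worker groups its own partition into a
    key -> list-of-values structure, with no synchronization."""
    groups = defaultdict(list)
    for item in partition:
        groups[key_fn(item)].append(item)
    return groups

def merge_pair(a, b):
    """Phase 2: merge two workers' structures in place into `a`.
    Disjoint pairs can be merged concurrently, halving the number
    of structures each round."""
    for key, values in b.items():
        a[key].extend(values)
    return a

def group_by(partitions, key_fn):
    structs = [local_group(p, key_fn) for p in partitions]
    while len(structs) > 1:      # tree of pairwise merges
        merged = [merge_pair(structs[i], structs[i + 1])
                  for i in range(0, len(structs) - 1, 2)]
        if len(structs) % 2:     # odd one out joins the next round
            merged.append(structs[-1])
        structs = merged
    return dict(structs[0]) if structs else {}
```

The merge cost depends on the number of distinct keys and workers, not on the number of elements, since value lists are moved by reference rather than copied element by element.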
    • 29. Patent application
    • AUTOMATIC OPTIMIZATION FOR PROGRAMMING OF MANY-CORE ARCHITECTURES
    • US20130132684A1
    • 2013-05-23
    • US13300464
    • 2011-11-18
    • Igor Ostrovsky, Zachary David Johnson
    • G06F12/00; G06F12/16
    • G06F11/3471; G06F11/3409
    • The present invention extends to methods, systems, and computer program products for automatically optimizing memory accesses by kernel functions executing on parallel accelerator processors. A function is accessed. The function is configured to operate over a multi-dimensional matrix of memory cells through invocation as a plurality of threads on a parallel accelerator processor. A layout of the memory cells of the multi-dimensional matrix and a mapping of memory cells to global memory at the parallel accelerator processor are identified. The function is analyzed to identify how each of the threads accesses the global memory to operate on corresponding memory cells when invoked from the kernel function. Based on the analysis, the function is altered to utilize a more efficient memory access scheme when performing accesses to the global memory. The more efficient memory access scheme increases coalesced memory access by the threads when invoked over the multi-dimensional matrix.
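A toy cost model shows why such a rewrite matters. Nothing below is the patent's analysis: it merely counts, for each group of `warp` threads sweeping a row-major matrix, how many distinct line-sized memory segments one access step touches, comparing a coalesced thread-to-cell mapping against a strided (column-walking) one. The warp and line sizes are assumed constants.

```python
def count_transactions(width, height, warp=32, line=32, column_major=False):
    """Count memory transactions for threads sweeping a row-major
    width x height matrix. Accesses by one warp that fall in the same
    `line`-sized segment coalesce into a single transaction."""
    total = 0
    n = width * height
    for base in range(0, n, warp):
        tids = range(base, min(base + warp, n))
        if column_major:
            # Thread t walks down a column: adjacent threads hit
            # addresses a full row apart, so nothing coalesces.
            addrs = [(t % height) * width + t // height for t in tids]
        else:
            # Thread t touches element t: adjacent threads hit
            # adjacent addresses, so the warp coalesces per line.
            addrs = list(tids)
        total += len({a // line for a in addrs})
    return total
```

For a 32x32 matrix the coalesced mapping needs one transaction per warp, while the strided mapping needs one per thread, a 32x difference in this toy model, which is the kind of gap the described transformation targets.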
    • 30. Granted patent
    • Parallel query engine with dynamic number of workers
    • US08392920B2
    • 2013-03-05
    • US12695049
    • 2010-01-27
    • Igor Ostrovsky, John J. Duffy, Stephen Harris Toub
    • G06F9/40; G06F9/44; G06F9/46
    • G06F9/4881; G06F2209/483
    • Partitioning query execution work of a sequence including a plurality of elements. A method includes a worker core requesting work from a work queue. In response, the worker core receives a task from the work queue. The task is a replicable sequence-processing task including two distinct steps: scheduling a copy of the task on the scheduler queue and processing a sequence. The worker core processes the task by: creating a replica of the task and placing the replica of the task on the work queue, and beginning processing the sequence. The acts are repeated for one or more additional worker cores, where the task each additional worker core receives from the work queue is a replica placed there earlier by a different worker core performing the replication step.