    • 21. Granted invention patent
    • Digital processor for processing long and short pointers and converting each between a common format
    • US08656139B2
    • 2014-02-18
    • US13045919
    • 2011-03-11
    • Stephan Meier; John G. Favor; Evan Gewirtz; Robert Hathaway; Eric Trehus
    • G06F12/00
    • G06F9/30043; G06F9/342
    • A digital processor stores pointers of different sizes in memory. The processor, specifically, executes instructions to store a long or short pointer. Long pointers reference any address in the memory's logical address space, while short pointers merely reference any address in a subset of that space. However, short pointers are smaller in size as stored in memory than long pointers. Long pointers thus support relatively large address range capabilities, while short pointers use less memory. The processor also executes instructions to load a long or short pointer into the register file, and does so in a way that does not require the processor to distinguish between the different pointers when executing other instructions. Specifically, the processor converts long and short pointers into a common format for loading into the register file, and converts pointers in the common format back into long or short pointers for storing in the memory.
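The load/store conversion the abstract describes can be sketched in a few lines. This is an illustrative model only, not the patented implementation: the 32-bit short-pointer width, the 64-bit common register format, and the `REGION_BASE` value are all assumptions chosen for the example.

```python
# Sketch: widening a short pointer into a common register format on load,
# and narrowing it back on store. Widths and base address are assumptions.

REGION_BASE = 0x0000_7000_0000_0000   # hypothetical base of the short-pointer subset
SHORT_BITS = 32
SHORT_MASK = (1 << SHORT_BITS) - 1

def load_short(stored: int) -> int:
    """Convert a 32-bit short pointer into the common 64-bit format."""
    return REGION_BASE | (stored & SHORT_MASK)

def load_long(stored: int) -> int:
    """Long pointers are already full-width; load them as-is."""
    return stored

def store_short(reg: int) -> int:
    """Narrow a common-format pointer back to its 32-bit stored form."""
    assert (reg & ~SHORT_MASK) == REGION_BASE, "address outside short-pointer region"
    return reg & SHORT_MASK

# Once loaded, both pointer kinds look identical to other instructions:
addr = load_short(0x1234)
assert store_short(addr) == 0x1234
```

The point of the common format is visible in the round trip: after `load_short` or `load_long`, every register holds a full-width address, so arithmetic and dereference instructions need no pointer-kind tag.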
    • 22. Invention patent application
    • SHORT POINTERS
    • US20120233414A1
    • 2012-09-13
    • US13045919
    • 2011-03-11
    • Stephan Meier; John G. Favor; Evan Gewirtz; Robert Hathaway; Eric Trehus
    • G06F12/00
    • G06F9/30043; G06F9/342
    • A digital processor stores pointers of different sizes in memory. The processor, specifically, executes instructions to store a long or short pointer. Long pointers reference any address in the memory's logical address space, while short pointers merely reference any address in a subset of that space. However, short pointers are smaller in size as stored in memory than long pointers. Long pointers thus support relatively large address range capabilities, while short pointers use less memory. The processor also executes instructions to load a long or short pointer into the register file, and does so in a way that does not require the processor to distinguish between the different pointers when executing other instructions. Specifically, the processor converts long and short pointers into a common format for loading into the register file, and converts pointers in the common format back into long or short pointers for storing in the memory.
    • 23. Invention patent application
    • Explicitly Regioned Memory Organization in a Network Element
    • US20120173841A1
    • 2012-07-05
    • US12983130
    • 2010-12-31
    • Stephan Meier; Robert Hathaway; Evan Gewirtz; Brian Alleyne; Edward Ho
    • G06F12/10
    • G06F12/1009; G06F2213/0038; Y02D10/13
    • A network element that includes multiple memory types and memory sizes translates a logical memory address into a physical memory address. A memory access request is received for a data structure with a logical memory address that includes a region identifier that identifies a region that is mapped to one or more memories and is associated with a set of one or more region attributes whose values are based on processing requirements provided by a software programmer and the available memories of the network element. The network element accesses the region mapping table entry corresponding to the region identifier and, using the region attributes that are associated with the region, determines an access target for the request, determines a physical memory address offset within the access target, and generates a physical memory address. The access target includes a target class of memory, an instance within the class of memory, and a particular physical address space of the instance within the class of memory. The physical memory address includes a network routing information portion that includes information to route the physical memory address to the target instance, and includes an address payload portion that includes information to identify the physical address space identified by the subtarget and the physical memory address offset.
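The region-based translation described above can be modeled compactly. This is a hedged sketch: the table fields, the bit layout of the logical and physical addresses, and the sample table entry are illustrative assumptions, not the patent's format.

```python
# Sketch: region-identifier lookup producing a physical address made of a
# routing portion and an address payload. Bit layout is an assumption.

from dataclasses import dataclass

@dataclass
class RegionEntry:
    target_class: int    # class of memory (e.g. 0 = on-chip SRAM, 1 = DRAM)
    instance: int        # which instance within that class
    address_space: int   # physical address space within the instance
    base: int            # base offset applied within the access target

# Hypothetical region mapping table, as configured from software requirements:
REGION_TABLE = {
    0x1: RegionEntry(target_class=0, instance=2, address_space=0, base=0x1000),
}

def translate(logical_addr: int) -> int:
    """Split a logical address into (region id, offset), look up the region's
    attributes, and build a physical address with routing info + payload."""
    region_id = logical_addr >> 24                 # assumed: top bits = region id
    offset = logical_addr & 0xFFFFFF
    e = REGION_TABLE[region_id]
    routing = (e.target_class << 8) | e.instance   # routes the request to the target
    payload = (e.address_space << 28) | (e.base + offset)
    return (routing << 32) | payload
```

The design point the abstract makes is that software sees only region identifiers and offsets; which physical memory (type, instance, address space) backs a region is decided by the table, not by the code issuing the access.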
    • 24. Invention patent application
    • METHOD AND APPARATUS FOR ACCESSING CACHE MEMORY
    • US20110289257A1
    • 2011-11-24
    • US12784276
    • 2010-05-20
    • Robert Hathaway; Evan Gewirtz
    • G06F12/08; G06F12/00
    • G06F12/0888; G06F12/12
    • A request for reading data from a memory location of a main memory is received, the memory location being identified by a physical memory address. In response to the request, a cache memory is accessed based on the physical memory address to determine whether the cache memory contains the data being requested. The data associated with the request is returned from the cache memory without accessing the memory location if there is a cache hit. The data associated is returned from the main memory if there is a cache miss. In response to the cache miss, it is determined whether there have been a number of accesses within a predetermined period of time. A cache entry is allocated from the cache memory to cache the data if there have been a predetermined number of accesses within the predetermined period of time.
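The allocation policy described above, where a miss allocates a cache entry only after a threshold number of accesses inside a time window, can be sketched as follows. The threshold, window length, and dict-based cache structure are illustrative assumptions, not the patent's hardware design.

```python
# Sketch: cache that allocates on a miss only when the address has been
# accessed a threshold number of times within a recent time window.

import time

class CountingCache:
    def __init__(self, threshold=3, window=1.0):
        self.data = {}          # address -> cached value
        self.miss_log = {}      # address -> timestamps of recent misses
        self.threshold = threshold
        self.window = window    # seconds

    def read(self, addr, main_memory):
        if addr in self.data:              # cache hit: serve without touching memory
            return self.data[addr]
        value = main_memory[addr]          # cache miss: fetch from main memory
        now = time.monotonic()
        recent = [t for t in self.miss_log.get(addr, []) if now - t <= self.window]
        recent.append(now)
        self.miss_log[addr] = recent
        if len(recent) >= self.threshold:  # enough recent accesses: allocate an entry
            self.data[addr] = value
        return value
```

The effect is that a one-off access never evicts existing cache contents; only addresses that prove themselves "hot" within the window earn an entry.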
    • 25. Invention patent application
    • Distributed Cache Coherence at Scalable Requestor Filter Pipes that Accumulate Invalidation Acknowledgements from other Requestor Filter Pipes Using Ordering Messages from Central Snoop Tag
    • US20070186054A1
    • 2007-08-09
    • US11307413
    • 2006-02-06
    • David Kruckemyer; Kevin Normoyle; Robert Hathaway
    • G06F13/28
    • G06F12/082; G06F12/0828
    • A multi-processor, multi-cache system has filter pipes that store entries for request messages sent to a central coherency controller. The central coherency controller orders requests from filter pipes using coherency rules but does not track completion of invalidations. The central coherency controller reads snoop tags to identify sharing caches having a copy of a requested cache line. The central coherency controller sends an ordering message to the requesting filter pipe. The ordering message has an invalidate count indicating the number of sharing caches. Each sharing cache receives an invalidation message from the central coherency controller, invalidates its copy of the cache line, and sends an invalidation acknowledgement message to the requesting filter pipe. The requesting filter pipe decrements the invalidate count until all sharing caches have acknowledged invalidation. All ordering, data, and invalidation acknowledgement messages must be received by the requesting filter pipe before loading the data into its cache.
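The requesting filter pipe's bookkeeping described above can be sketched as a small state machine: it records the ordering message's invalidate count, then counts acknowledgements until data may be loaded into the cache. The class and method names are illustrative, not the patent's hardware structure.

```python
# Sketch: one filter-pipe entry accumulating the ordering message, the data,
# and invalidation acknowledgements before the cache fill is allowed.

class FilterPipeEntry:
    def __init__(self):
        self.invalidate_count = None   # set by the ordering message
        self.acks_received = 0         # acks from sharing caches
        self.data_received = False

    def on_ordering_message(self, invalidate_count: int):
        """Central snoop-tag controller reports how many sharers must invalidate."""
        self.invalidate_count = invalidate_count

    def on_invalidation_ack(self):
        """A sharing cache has invalidated its copy and acknowledged directly to us."""
        self.acks_received += 1

    def on_data(self):
        self.data_received = True

    def can_fill_cache(self) -> bool:
        """Data loads only once ordering, data, and all acks have arrived."""
        return (self.invalidate_count is not None
                and self.data_received
                and self.acks_received >= self.invalidate_count)
```

This mirrors the division of labor in the abstract: the central controller only orders requests and reports a sharer count, while completion tracking is distributed to the requestor's filter pipe.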
    • 27. Granted invention patent
    • Transferring data between cache memory and a media access controller
    • US06920529B2
    • 2005-07-19
    • US10105857
    • 2002-03-25
    • Fred Gruner; Robert Hathaway; Ricardo Ramirez
    • G06F12/00; G06F12/08; H04L12/56; G06F13/00
    • H04L47/10; G06F12/0813; G06F12/0831; G06F12/084; H04L47/12; H04L47/20; H04L47/2441; H04L47/765; H04L47/805; H04L47/822
    • A coprocessor transfers data between media access controllers and a set of cache memory without accessing main memory. The coprocessor includes a reception media access controller that receives data from a network and a transmission media access controller that transmits data to a network. A streaming output data transfer engine in the coprocessor transfers data from the reception media access controller to cache memory. A streaming input data transfer engine in the coprocessor transfers data from cache memory to the transmission media access controller. The coprocessor's data transfer engines transfer data between cache memory and the media access controllers in a single data transfer operation—eliminating the need to store data in an intermediary memory location between the cache memory and data transfer engines. In one implementation, the coprocessor is employed in a compute engine that performs different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.
    • 28. Granted invention patent
    • Compute engine employing a coprocessor
    • US06901488B2
    • 2005-05-31
    • US10105587
    • 2002-03-25
    • Fred Gruner; Robert Hathaway; Ricardo Ramirez
    • G06F12/00; G06F12/08; H04L12/56; G06F13/00
    • H04L47/10; G06F12/0813; G06F12/0831; G06F12/084; H04L47/12; H04L47/20; H04L47/2441; H04L47/765; H04L47/805; H04L47/822
    • A compute engine includes a central processing unit coupled to a coprocessor. The coprocessor includes a sequencer coupled to a set of application engines for performing operations assigned to the compute engine. The sequencer is coupled to application engines through a set of data, enable, and control interfaces. An arbiter couples the sequencer and application engines to memory. Alternatively, the coprocessor may include multiple sequencers, with each sequencer being coupled to a different set of application engines. One set of application engines includes a media access controller for communicating with a network and a data transfer engine coupling the media access controller to the arbiter. In one implementation, application engines facilitate different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.
    • 29. Granted invention patent
    • Co-processor including a media access controller
    • US06898673B2
    • 2005-05-24
    • US10105973
    • 2002-03-25
    • Frederick Gruner; Robert Hathaway; Ramesh Panwar; Elango Ganesan; Nazar Zaidi
    • G06F12/00; G06F12/08; H04L12/56; G06F13/00
    • H04L47/10; G06F12/0813; G06F12/0831; G06F12/084; H04L47/12; H04L47/20; H04L47/2441; H04L47/765; H04L47/805; H04L47/822
    • A compute engine includes a central processing unit coupled to a coprocessor. The coprocessor includes a media access controller engine and a data transfer engine. The media access controller engine couples the compute engine to a communications network. The data transfer engine couples the media access controller engine to a set of cache memory. In further embodiments, a compute engine includes two media access controller engines. A reception media access controller engine receives data from the communications network. A transmission media access controller engine transmits data to the communications network. The compute engine also includes two data transfer engines. A streaming output engine stores network data from the reception media access controller engine in cache memory. A streaming input engine transfers data from cache memory to the transmission media access controller engine. In one implementation, the compute engine performs different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.