    • 22. Invention application
    • Title: Pretranslating Input/Output Buffers In Environments With Multiple Page Sizes
    • Publication no.: US20080263313A1
    • Publication date: 2008-10-23
    • Application no.: US12169826
    • Filing date: 2008-07-09
    • Inventor(s): David Alan Hepkin
    • IPC: G06F12/10
    • CPC: G06F12/1081; G06F2212/652
    • Abstract: Pretranslating input/output buffers in environments with multiple page sizes, including determining a pretranslation page size for an input/output buffer under an operating system that supports more than one memory page size, identifying pretranslation page frame numbers for the buffer in dependence upon the pretranslation page size, pretranslating those page frame numbers to physical page numbers, and storing the physical page numbers in association with the pretranslation page size. Typical embodiments also include accessing the buffer: translating a virtual memory address in the buffer to a physical memory address in dependence upon the physical page numbers and the pretranslation page size, and accessing the buffer's physical memory at that physical memory address. (An illustrative sketch of this address-translation scheme follows this entry.)
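The abstract above describes pretranslating a buffer's page frame numbers once and then resolving later accesses from the stored translations. Below is a minimal, hedged sketch of that idea in Python; the flat page-table dictionary, the function names, and the 4 KiB page size are illustrative assumptions, not the patent's or any operating system's actual structures.

```python
# Toy model of the pretranslation idea described in the abstract above.
# The flat "page table" and all names are illustrative assumptions only.

PAGE_TABLE = {          # virtual page frame number -> physical page number
    0x100: 0x7A0,       # (hypothetical mappings for a 16 KiB buffer)
    0x101: 0x7A1,
    0x102: 0x9B4,
    0x103: 0x9B5,
}

def pretranslate_buffer(buf_vaddr, buf_len, page_size):
    """Record physical page numbers for every page the buffer spans."""
    first_vpn = buf_vaddr // page_size
    last_vpn = (buf_vaddr + buf_len - 1) // page_size
    phys = [PAGE_TABLE[vpn] for vpn in range(first_vpn, last_vpn + 1)]
    # Store the physical page numbers together with the page size used.
    return {"page_size": page_size, "base_vpn": first_vpn, "phys_pages": phys}

def access(pretrans, vaddr):
    """Translate a virtual address inside the buffer using the stored entries."""
    ps = pretrans["page_size"]
    index = vaddr // ps - pretrans["base_vpn"]
    offset = vaddr % ps
    return pretrans["phys_pages"][index] * ps + offset

if __name__ == "__main__":
    PAGE = 4096
    buf = 0x100 * PAGE + 128            # buffer starts 128 bytes into page 0x100
    pt = pretranslate_buffer(buf, 3 * PAGE, PAGE)
    print(hex(access(pt, buf + 5000)))  # physical address of byte 5000 of the buffer
```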
    • 26. Granted patent
    • Title: Selective memory donation in virtual real memory environment
    • Publication no.: US08799892B2
    • Publication date: 2014-08-05
    • Application no.: US12135316
    • Filing date: 2008-06-09
    • Inventor(s): David Alan Hepkin
    • IPC: G06F9/455; G06F9/46
    • CPC: G06F9/45558; G06F12/0866; G06F2009/45583
    • Abstract: A method, system, and computer usable program product for selective memory donation in a virtual real memory environment are provided in the illustrative embodiments. A virtual machine receives a request for memory donation. A component of the virtual machine determines whether the portion of a memory space being used for file caching (the file cache) exceeds a threshold; this forms a threshold determination. If the threshold determination is false, the component ignores the request. If it is true, a component of the virtual machine releases the part of the file cache that exceeds the threshold, forming a released file cache. In response to the request, the virtual machine makes the released file cache available to the requester. (A hedged sketch of this decision logic follows this entry.)
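The donation decision above is a simple threshold rule: donate only the file-cache pages above a configured limit. The sketch below illustrates that rule; the class names, the byte-based accounting, and the hypervisor acting as the requester are assumptions made for illustration and are not taken from the patent.

```python
# Minimal sketch of the donation decision described above; the sizes,
# threshold, and handler names are illustrative assumptions only.

class GuestVM:
    def __init__(self, file_cache_bytes, cache_threshold_bytes):
        self.file_cache = file_cache_bytes
        self.threshold = cache_threshold_bytes

    def handle_donation_request(self, requester):
        """Donate only the part of the file cache that exceeds the threshold."""
        if self.file_cache <= self.threshold:
            return 0                      # threshold determination is false: ignore
        released = self.file_cache - self.threshold
        self.file_cache = self.threshold  # shrink the cache to the threshold
        requester.receive(released)       # make the released pages available
        return released

class Hypervisor:
    def __init__(self):
        self.reclaimed = 0
    def receive(self, nbytes):
        self.reclaimed += nbytes

hv = Hypervisor()
vm = GuestVM(file_cache_bytes=6 * 2**30, cache_threshold_bytes=4 * 2**30)
print(vm.handle_donation_request(hv), hv.reclaimed)   # 2 GiB donated
```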
    • 28. Granted patent
    • Title: Modeling memory compression
    • Publication no.: US08364928B2
    • Publication date: 2013-01-29
    • Application no.: US13454323
    • Filing date: 2012-04-24
    • Inventor(s): Saravanan Devendra; David Alan Hepkin; Rajalakshmi SrinivasaRaghavan
    • IPC: G06F12/02
    • CPC: H03M7/3068; G06F12/08; G06F12/1009; G06F2212/401
    • Abstract: A method for modeling memory compression is provided in the illustrative embodiments. A subset of candidate pages is received, where the candidate pages are those used in executing a workload in a data processing system and a candidate page is compressible uncompressed data in a memory associated with that system. The subset of candidate pages is compressed in a scratch space, and a compressibility of the workload is computed from that compression. Page reference information for the subset is received, and a memory reference rate of the workload is determined. A recommendation about a memory compression model for the workload in the data processing system is then presented. (A hedged sketch of estimating compressibility from a sampled subset follows this entry.)
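One way to read the steps above is: sample some of the workload's pages, compress the sample into a scratch buffer, and use the resulting ratio as the workload's compressibility. The sketch below does exactly that under stated assumptions; the 5% sampling fraction, the 4 KiB page size, and the use of zlib as the compressor are illustrative choices, not details from the patent.

```python
# Illustrative sketch only: estimates workload compressibility by compressing
# a sampled subset of pages in a scratch buffer, loosely following the steps
# in the abstract above. The sampling rate, page size, and choice of zlib are
# assumptions; the patent does not specify a particular compressor.
import os
import random
import zlib

PAGE_SIZE = 4096

def sample_candidate_pages(pages, fraction=0.05):
    """Pick a small subset of the workload's candidate pages."""
    k = max(1, int(len(pages) * fraction))
    return random.sample(pages, k)

def estimate_compressibility(subset):
    """Compress the subset into a scratch space and report the ratio."""
    raw = sum(len(p) for p in subset)
    compressed = sum(len(zlib.compress(p)) for p in subset)  # scratch-space compression
    return raw / compressed

if __name__ == "__main__":
    # Fake workload: half zero pages (highly compressible), half random pages.
    pages = [bytes(PAGE_SIZE) for _ in range(500)] + \
            [os.urandom(PAGE_SIZE) for _ in range(500)]
    subset = sample_candidate_pages(pages)
    print(f"estimated compression ratio: {estimate_compressibility(subset):.2f}x")
```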
    • 29. Invention application
    • Title: Modeling Memory Compression
    • Publication no.: US20110238943A1
    • Publication date: 2011-09-29
    • Application no.: US12748921
    • Filing date: 2010-03-29
    • Inventor(s): Saravanan Devendran; David Alan Hepkin; Rajalakshmi SrinivasaRaghavan
    • IPC: G06F12/00; G06F12/02
    • CPC: H03M7/3068; G06F12/08; G06F12/1009; G06F2212/401
    • Abstract: A method, system, and computer usable program product for modeling memory compression are provided in the illustrative embodiments. The abstract is otherwise identical to that of granted patent US08364928B2 (item 28 above): a subset of candidate pages is compressed in a scratch space, the workload's compressibility is computed from that compression, page reference information for the subset is received, a memory reference rate of the workload is determined, and a recommendation about a memory compression model for the workload is presented. (A hedged sketch that turns these two measurements into a recommendation follows this entry.)
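The recommendation step in the abstract combines how well the pages compress with how often they are referenced. The sketch below shows one plausible decision rule under stated assumptions; the thresholds (a 1.5x minimum ratio, a 50,000 pages/s reference-rate cap) and the wording of the recommendations are invented for illustration and do not come from the patent.

```python
# Hypothetical follow-on to the previous sketch: combine the estimated
# compressibility with a measured memory reference rate to suggest whether
# memory compression is worthwhile. The thresholds and the decision rule
# are assumptions for illustration; the patents do not publish specific values.

def recommend_compression(compress_ratio, ref_rate_pages_per_sec,
                          min_ratio=1.5, max_ref_rate=50_000):
    """Recommend compression only if pages shrink well and are not too hot."""
    if compress_ratio < min_ratio:
        return "skip: data does not compress enough to be worth the CPU cost"
    if ref_rate_pages_per_sec > max_ref_rate:
        return "skip: pages are referenced too often; decompression would dominate"
    return (f"enable: expected expansion of about {compress_ratio:.1f}x "
            f"at a tolerable reference rate")

print(recommend_compression(2.3, 12_000))
print(recommend_compression(1.1, 12_000))
```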
    • 30. Invention application
    • Title: Expanding Memory Size
    • Publication no.: US20110107054A1
    • Publication date: 2011-05-05
    • Application no.: US12611190
    • Filing date: 2009-11-03
    • Inventor(s): David Alan Hepkin; Satya Prakash Sharma; Saurabh Nath Sharma; Randal Craig Swanberg
    • IPC: G06F12/00
    • CPC: G06F13/16
    • Abstract: A method, system, and computer usable program product for expanding memory size are provided in the illustrative embodiments. A desired size of an expanded memory and first information about a workload in the data processing system are received. The size of a compressed memory pool to use with the memory is computed such that the desired size of the expanded memory becomes available. A representation of the memory is configured that appears to be larger than the memory itself; this representation is the expanded memory, and its size is the desired expanded memory size. The expanded memory is then made available such that the memory in the data processing system is usable by addressing the expanded memory. (A hedged back-of-the-envelope sizing model follows this entry.)
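The abstract says the compressed pool size is computed from the desired expanded size and workload information, but gives no formula. One simple hedged model: with physical memory M, a compressed pool C, and an expected compression ratio r estimated from the workload, the system can present roughly (M - C) + C*r of memory, so hitting a target expanded size E suggests C = (E - M) / (r - 1). The function below encodes that assumption; it is an illustrative model, not the computation claimed in the patent.

```python
# Back-of-the-envelope sizing model, not the computation claimed in the patent.
# Assumption: pages moved into the compressed pool shrink by an expected
# ratio r (estimated from the workload), so presenting an expanded size E
# from physical memory M needs a pool C satisfying (M - C) + C*r = E.

def compressed_pool_size(physical_gib, expanded_gib, expected_ratio):
    if expanded_gib <= physical_gib:
        return 0.0                     # no pool needed
    if expected_ratio <= 1.0:
        raise ValueError("workload must compress (ratio > 1) to expand memory")
    return (expanded_gib - physical_gib) / (expected_ratio - 1.0)

# Example: 16 GiB of real memory, a workload that compresses about 2.5x,
# and a desired expanded size of 24 GiB.
pool = compressed_pool_size(16, 24, 2.5)
print(f"compressed pool: {pool:.2f} GiB "
      f"({16 - pool:.2f} GiB stays uncompressed)")
```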