    • 3. Granted Invention Patent
    • Gesture mapping for image filter input parameters
    • Publication number: US09531947B2
    • Publication date: 2016-12-27
    • Application number: US14268041
    • Filing date: 2014-05-02
    • Assignee: Apple Inc.
    • Inventors: David Hayward, Chendi Zhang, Alexandre Naaman, Richard R. Dellinger, Giridhar S Murthy
    • IPC: H04N5/232, G06T3/00, H04N101/00, H04N5/262
    • CPC: G06F3/04845, G06F3/0488, G06F3/04883, G06F2203/04808, G06T3/0093, G09G2340/0492, H04N5/23216, H04N5/2621, H04N2101/00
    • This disclosure pertains to systems, methods, and computer readable medium for mapping particular user interactions, e.g., gestures, to the input parameters of various image processing routines, e.g., image filters, in a way that provides a seamless, dynamic, and intuitive experience for both the user and the software developer. Such techniques may handle the processing of both “relative” gestures, i.e., those gestures having values dependent on how much an input to the device has changed relative to a previous value of the input, and “absolute” gestures, i.e., those gestures having values dependent only on the instant value of the input to the device. Additionally, inputs to the device beyond user-input gestures may be utilized as input parameters to one or more image processing routines. For example, the device's orientation, acceleration, and/or position in three-dimensional space may be used as inputs to particular image processing routines.
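The abstract above distinguishes "relative" gestures, where the filter parameter changes by the delta of the input since the previous event, from "absolute" gestures, where the parameter is set directly from the instantaneous input. A minimal Swift sketch of that distinction follows; it is not the patented implementation, and the FilterParameterMapper type and its method names are hypothetical.

```swift
import UIKit

// Hypothetical mapper from touch input to a normalized image-filter parameter
// (e.g., a blur radius scaled to 0...1). Illustrates the "relative" vs.
// "absolute" gesture mapping described in the abstract.
final class FilterParameterMapper {
    private(set) var parameterValue: CGFloat = 0.5

    // "Relative" mapping: the new value depends on the previous value plus the
    // change in the input since the last event.
    func applyRelative(translation: CGPoint, in view: UIView) {
        let delta = translation.x / view.bounds.width
        parameterValue = min(max(parameterValue + delta, 0), 1)
    }

    // "Absolute" mapping: the value depends only on the instantaneous input,
    // here the horizontal touch position within the view.
    func applyAbsolute(location: CGPoint, in view: UIView) {
        parameterValue = min(max(location.x / view.bounds.width, 0), 1)
    }
}
```

In practice the relative path could be driven by a UIPanGestureRecognizer's translation(in:), resetting the translation after each event with setTranslation(_:in:), and the absolute path by location(in:).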
    • 4. Invention Patent Application
    • Three Dimensional User Interface Effects On A Display By Using Properties Of Motion
    • Publication number: US20150106768A1
    • Publication date: 2015-04-16
    • Application number: US14571062
    • Filing date: 2014-12-15
    • Assignee: Apple Inc.
    • Inventors: Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke
    • IPC: G06F3/0481, G06F3/01
    • CPC: G06F3/04815, G06F3/005, G06F3/013, G06F3/017, G06F3/0346, G06F3/0488, G06F2203/0381, G06T15/20
    • The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time to provide a continuous 3D frame-of-reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame-of-reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
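As a rough illustration of the continuous 3D frame of reference described in the abstract above, the sketch below reads the device attitude from CoreMotion and exposes its axes as a Frenet-style frame: X and Y in the display plane and Z perpendicular to the display. The FrenetFrame and FrameTracker names are hypothetical, and the eye-position estimation via the front-facing camera is omitted.

```swift
import CoreMotion
import simd

// Hypothetical frame type: the display's X- and Y-vectors plus the Z-vector
// pointing perpendicularly out of the display, expressed in the reference frame.
struct FrenetFrame {
    let x: simd_double3
    let y: simd_double3
    let z: simd_double3
}

final class FrameTracker {
    private let motion = CMMotionManager()

    // Deliver a continuously updated frame of reference at roughly 60 Hz.
    func start(handler: @escaping (FrenetFrame) -> Void) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { deviceMotion, _ in
            guard let m = deviceMotion?.attitude.rotationMatrix else { return }
            // Read the device axes out of the attitude rotation matrix (taken
            // column-wise here; the exact row/column choice depends on the
            // convention used when applying the matrix).
            handler(FrenetFrame(
                x: simd_double3(m.m11, m.m21, m.m31),
                y: simd_double3(m.m12, m.m22, m.m32),
                z: simd_double3(m.m13, m.m23, m.m33)))
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```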