Have you ever gotten stuck in a video game level that you just can't quite figure out how to get past? Of course you have, and usually, like most, you head to YouTube looking for a video that shows you what to do at the point where you're stuck. With Google's Stadia game controller, you can just hit the assistant button for help beating the game, which likely sends you to a YouTube tutorial.
Today, the US Patent & Trademark Office published a patent application from Apple that could be a first for VR gaming. A future Apple head-mounted display system may allow users to call up a contextual digital assistant at the point in a game where they're stuck. That's nice in theory and could be a killer feature if it actually works.
Below is one of Apple’s examples of using a contextual digital assistant for VR gaming.
Apple notes that patent FIGS. 4R-4T below illustrate various visual representations of a contextual CGR digital assistant providing navigation in the CGR environment #400.
In FIG. 4R, the image data characterizes the user (#405) walking along a path (#442) towards a train station (#444). Atop the train station #444 is a clock, and the time it shows indicates that a train the user is scheduled to ride is arriving soon.
In some implementations, a contextual trigger for a contextual CGR digital assistant that can provide navigation and guidance along the path to the train station is identified in the pass-through image.
The contextual trigger includes, for example, the time, the user's profile, the user's body pose, and/or the path leading to the train station.
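As a thought experiment, the trigger logic described above can be sketched in a few lines of code. Everything below is our own illustrative assumption (the `SceneAnalysis` fields, the 15-minute cutoff, the function name); the patent filing publishes no implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneAnalysis:
    """Hypothetical output of analyzing the pass-through image and
    user context. Field names are illustrative, not Apple's API."""
    path_to_station_visible: bool         # path detected in the image
    user_walking: bool                    # inferred from body pose
    minutes_to_train: Optional[float]     # from the user's calendar, if any

def navigation_trigger_fires(scene: SceneAnalysis) -> bool:
    """A minimal sketch of the trigger check: offer navigation only when
    the path is visible, the user is walking, and a scheduled train is
    coming up soon. The 15-minute threshold is an assumed value."""
    return (scene.path_to_station_visible
            and scene.user_walking
            and scene.minutes_to_train is not None
            and scene.minutes_to_train <= 15)

print(navigation_trigger_fires(SceneAnalysis(True, True, 10.0)))   # True
print(navigation_trigger_fires(SceneAnalysis(True, False, 10.0)))  # False
```

Combining several weak signals (pose, path, calendar) before summoning the assistant is what keeps it from popping up constantly, which matches the patent's emphasis on "appropriate contexts."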
Dolphins are often regarded as one of the Earth’s most intelligent animals, performing actions such as navigation using the magnetic field of the earth. Further, dolphins are known to be able to interact with people as they are highly sociable with people. As such, based on these traits of dolphins, as shown in FIG. 4S, dolphins are selected as the visual representation of the contextual CGR digital assistant to provide navigation and lead the user to the train station. In FIG. 4S, water is displayed as an overlay on the path #442 with the dolphins swimming in the water.
In some implementations, depending on context including the calendar events associated with the user, such as train schedule and the current time, the animation of the dolphins adapts to the context.
For example, if the train schedule and the current time and/or location of the user indicate that the user is about to be late to catch the train, the computer-generated dolphins swim faster to guide the user towards the train station in order to catch the train.
On the other hand, if the context indicates that the user has plenty of time to catch the train, the computer-generated dolphins may swim relatively slower towards the train station.
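The adaptation in those two paragraphs amounts to scaling an animation rate by urgency. The sketch below is purely illustrative: the class, function names and the urgency heuristic are our assumptions, not anything Apple publishes in the filing.

```python
from dataclasses import dataclass

@dataclass
class NavigationContext:
    """Context the patent mentions: the train schedule from the user's
    calendar, the current time, and the user's progress along the path.
    All names here are illustrative assumptions."""
    seconds_until_departure: float         # train time minus current time
    seconds_needed_at_walking_pace: float  # estimated time to reach the station

def dolphin_swim_speed(ctx: NavigationContext,
                       base_speed: float = 1.0,
                       max_speed: float = 3.0) -> float:
    """Scale the dolphins' animation speed by urgency: the less slack the
    user has to catch the train, the faster the dolphins swim."""
    if ctx.seconds_until_departure <= 0:
        return max_speed  # train is leaving: swim as fast as allowed
    urgency = ctx.seconds_needed_at_walking_pace / ctx.seconds_until_departure
    # Plenty of time (urgency < 1) -> stay at base speed;
    # running late (urgency > 1) -> speed up, capped at max_speed.
    return min(max_speed, base_speed * max(1.0, urgency))

# Running late: needs 10 minutes but only has 5 -> dolphins swim 2x faster.
print(dolphin_swim_speed(NavigationContext(300, 600)))   # 2.0
# Plenty of time: needs 5 minutes and has 20 -> relaxed base speed.
print(dolphin_swim_speed(NavigationContext(1200, 300)))  # 1.0
```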
When users are in a game or computer-generated reality environment, there could be times when, as Apple puts it, the user is "overwhelmed" and perhaps disoriented.
More specifically, contextual CGR digital assistants are composited into a composited reality scene under appropriate contexts, and are provided to a user in response to determining that image data (e.g., image data for content presented and passed through the composited reality scene) includes a contextual trigger.
The contextual CGR digital assistants thus assist the user to obtain information from the composited reality scene by subtly drawing the user’s attention to relevant computer-generated media content.
Further, because representations of the contextual CGR digital assistants depend on what people expect a representation to do and/or a cultural understanding of what is associated with a particular representation, using contextual CGR digital assistants to draw the user's attention to information in the composited reality scene provides a natural user experience.
For example, knowing a dog's ability to run fast and fetch items, when the user notices a restaurant (real-world or CGR), a computer-generated dog can be used as the visual representation of a contextual CGR digital assistant that quickly fetches restaurant information for the user.
Other examples include a computer-generated cat that leads the user to an interesting place; a computer-generated falcon that flies up ahead to give a plan-view perspective of an area; a computer-generated parrot that whispers key information; a computer-generated hummingbird or butterfly that points out small details; and a computer-generated dolphin that leads the way to a location, as in the gaming example of FIGS. 4R and 4S.
Personifying these virtual animals as visual representations of contextual CGR digital assistants allows the user to obtain information from the CGR environment without feeling overwhelmed.
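To make the pattern concrete, the animal examples above boil down to a mapping from the kind of help the user needs to a culturally resonant creature. The table and function below are our own illustrative sketch; none of these identifiers come from Apple's filing.

```python
# Hypothetical lookup from the user's information need to the creature the
# patent's examples associate with it. Keys and values are our assumptions.
ASSISTANT_REPRESENTATIONS = {
    "fetch_info":       "dog",          # dogs fetch: retrieves restaurant details
    "lead_to_place":    "cat",          # leads the user to an interesting place
    "aerial_overview":  "falcon",       # flies up for a plan-view perspective
    "whisper_info":     "parrot",       # whispers key information
    "point_out_detail": "hummingbird",  # draws attention to small details
    "navigate_route":   "dolphin",      # leads the way, as in FIGS. 4R-4S
}

def pick_representation(information_need: str) -> str:
    """Return a visual representation for the contextual assistant,
    falling back to a generic robot when no animal metaphor fits."""
    return ASSISTANT_REPRESENTATIONS.get(information_need, "robot")

print(pick_representation("navigate_route"))          # dolphin
print(pick_representation("see_through_obstacles"))   # robot
```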
As a non-animal example, in some cultures the Big Dipper constellation in the sky provides directions (e.g., cardinal directions). Based on such cultural understandings, a computer-generated Big Dipper constellation is displayed in the CGR scene as a contextual CGR digital assistant to provide the general direction to a faraway place.
In another non-animal example, in a CGR scene, a computer-generated hot air balloon or a computer-generated robot (e.g., a computer-generated drone) floats in the sky. In response to a user’s gaze to the sky or a head movement by the user signifying looking up, the computer-generated hot air balloon or the computer-generated robot flies closer to the user in order to provide direction or navigation.
In the case of a computer-generated robot on the ground, the computer-generated robot can be a contextual CGR digital assistant to see through obstacles, fetch information from a small space, locate music records, etc. Thus, through the visual representations of these contextual CGR digital assistants, information is subtly conveyed to the user, and the extra capacity and/or ability to acquire the information through these contextual CGR digital assistants empower the user.
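As a rough sketch of how the look-up-to-summon behavior might be wired, consider the check below. The 30-degree threshold, parameter names and function are all our illustrative assumptions rather than anything from the patent.

```python
def should_approach_user(gaze_pitch_degrees: float,
                         assistant_altitude_m: float,
                         pitch_threshold: float = 30.0) -> bool:
    """A minimal sketch of the gaze trigger: when the user's gaze (or
    head pose) pitches upward past a threshold, a sky-borne assistant
    (the hot air balloon, drone or robot) flies closer to offer
    navigation. The threshold value is an assumed number; the patent
    specifies none."""
    looking_up = gaze_pitch_degrees >= pitch_threshold
    airborne = assistant_altitude_m > 0.0
    return looking_up and airborne

print(should_approach_user(45.0, 120.0))  # True: user looks up at a drone
print(should_approach_user(5.0, 120.0))   # False: gaze stays level
```

Gating the approach on both gaze and the assistant's position keeps a grounded robot from reacting to a sky-directed glance, in the spirit of the distinct sky and ground cases described above.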
The way contextual CGR digital assistants will function may depend on the type of display the user is viewing the content on, whether a VR headset, an iPhone, an iPad, or a heads-up display on a car's windshield.
How do I know that a lot of the context is about being in a gaming environment? Well, do you know of any consumer drone that could be called up to tell you what or who is in a closed tent? I rather doubt it. But in Apple's patent FIG. 4H below, that's exactly what the contextual digital assistant in the form of a drone is able to do. How can that not be a game assistant?
Of course, future contextual CGR digital assistants will also be used in other environments, like tourism, where smartglasses will be able to help tourists navigate to hot restaurants, top museums and famed attractions.
In this latest patent application, Apple is guiding us through potentially new software for a future HMD system and it’s likely just the beginning of software for future HMD hardware.
Being that it's a first for Apple, it's a little difficult to fully wrap our heads around the concept. However, most of us are familiar with Siri, Google Assistant and Alexa for general inquiries in the real world, and Apple's vision is to create contextual digital assistants designed for future VR, AR and mixed reality systems: characters or animals that could assist us in navigating to places and along roads, or in working through a restaurant menu.
Be sure to check out Apple’s patent application 20200098188 titled “Contextual Computer-Generated Reality (CGR) Digital Assistants,” for more details. Considering that this is a patent application, the timing of such a product to market is unknown at this time.
Inventors: Avi Bar-Zeev (Oakland, CA); Golnaz Abdollahian (San Francisco, CA); Devin William Chalmers (Oakland, CA); David H. Y. Huang (San Mateo, CA)