
Eye Tracking On Universal And Personalized Search


Just Behave - A Column From Search Engine Land
In the past two columns, I’ve featured interviews (Part I, Part II) exploring where search might go in the next three years. The two themes consistently mentioned as the most important for the future were personalization and blended search results.

Being a user-centric type of person, my first question was, “How will that impact the search user?” So, at Enquiro, we tried to shed some light on that question. We conducted an eye tracking study looking at interactions with Google’s Universal search results, and we also created a mock-up of what a personalized search page would look like. Today, I’d like to share a few of the findings with you.

The full report, along with the interviews, is available in a whitepaper (PDF).

Chunking of the page rather than F-shaped scan patterns

There was one fairly obvious difference we saw as soon as we compared a heat map from a typical blended result against heat maps from previous, pre-blended results. Our belief was that pictures would change the orientation point, leading to a distinctly different experience, and this did appear to be the case.

[Image: justbehave-sep22-1.jpg]

In the pre-blended world (heat map upper right), there was a very common tendency to orient in the upper left corner (indicated by A) and to start the scanning from there, first vertically (the down arrow) and then scanning across when a title catches your attention (the right-pointing arrow).

But in the blended results (left heat map), you’ll notice that while there still is some scanning in the very upper left (B), it doesn’t appear that the scanning starts there. Instead, orientation appeared to happen at the graphic thumbnail in the results (C), and scanning started from there. Scanning seems to be predominantly to the side and below (D). Could this push scanning down, moving the Golden Triangle down the page?

In fact, the presentation of a graphic element high in the results, such as the image of the iPhone shown below, seems to result in a mental division of the page, which we refer to as “chunking” the page. It seems we extend mental boundaries from the edges of the picture and divide the page up for further scanning. Here is the sequence of scanning that we observed when these conditions were present.

[Image: justbehave-sep22-2.jpg]

While we still seem to swing our eyes up to the upper left, we almost immediately (in under a second) move our eyes to the image (A) to determine if it’s relevant. A graphic image appears to be a powerful attractor to the eye. The tendency then is to determine if the listing beside the graphic (B) is relevant and unique in some way. Our brains tell us that because this listing has a unique treatment in the listings, it should be unique in some way. This is likely because universal results are still a new concept to us. Perhaps with time, we’ll become less sensitive to these listings. Regardless, at this point, we saw a tendency to scan this listing first. Then, because we still like to scan three or four listings before making our choice, we make our choice from the “chunks” above the image (C) and below it (D).

[Image: justbehave-sep22-3.jpg]

Rather than the top-to-bottom, left-to-right F-shaped scan characteristic of the pre-blended world, we see more of an “E”-shaped pattern, with the middle horizontal scan leg, where the image appears, being the first one scanned (see image above). The upper-left, top-to-bottom bias that was such a powerful factor in search behavior before seems to be lessened dramatically by the presence of an image.
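
For readers who like to think in concrete terms, here is a minimal toy model of the scan sequence just described. The function name and listing structure are hypothetical, not anything from the study; the actual findings came from eye-tracking heat maps, not code.

```python
# A toy model (not from the study) of the "chunking" scan order described
# above: orient on the image, scan the adjacent listing first, then the
# chunk above the image, then the chunk below it.

def predicted_scan_order(num_listings, image_index):
    """Hypothetical helper: the order in which listings are likely scanned
    when a graphic appears beside listing `image_index` (0-based)."""
    order = [("image", image_index)]          # the eye jumps to the image first
    order.append(("listing", image_index))    # then the listing beside it
    order.extend(("listing", i) for i in range(image_index))  # chunk above (C)
    order.extend(("listing", i) for i in range(image_index + 1, num_listings))  # chunk below (D)
    return order

# With an image beside the third of six listings:
print(predicted_scan_order(num_listings=6, image_index=2))
# [('image', 2), ('listing', 2), ('listing', 0), ('listing', 1),
#  ('listing', 3), ('listing', 4), ('listing', 5)]
```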

Fencing of scanning

Another common behavior we observed was the “fencing” of scanning through the presence of images or graphic elements with straight sides.

[Image: justbehave-sep22-5.jpg]

It seems we like to extend these straight lines to form mental boundaries that we use to divide up the page for scanning. In addition to creating the scanning “chunks” described earlier, this can also have the effect of restricting scanning beyond the boundary. For example, look at the two heat maps above. In both cases, it appears the presence of an image created a “fence” that restricted scanning below it and led to greater scanning above.

This of course depends on a quick scan to determine whether greater scent exists above or below the fence. In a search results page, if there are enough listings above the fence (given that we like to have at least a few options to consider), it’s natural to assume that we’ll find greater relevance above than below. But the fact remains: the presence of a straight-sided graphic element leads us to extend those sides into boundaries, and once the page is divided, we tend to judge the scent of these sections as a whole, rather than scanning each of the listings individually. This is the same behavior that leads us to dismiss the ads on the right rail as a group after a quick glance at the first one, rather than scanning them individually. “Chunking” and the presence of these “fences” change our linear scanning behavior, causing us to break the page up more.

From looking at the interactions with Google’s universal results set, it seems there are a couple of significant developments that could impact how we interact with search results. The presence of a graphic on the page engages us in a different manner than simply showing us text listings. There are two factors at play here. First, as Jakob Nielsen pointed out, we “grok” images a lot faster. A quick glance is enough for us to determine the meaning of an image. But secondly, and probably more importantly, an image fires different parts of our brain. Reading text is an abstract, logical process, but images appeal to us at an emotional level. Recent studies have shown that although our brains process different types of information in parallel, emotional inputs are processed much more quickly than rational ones. Something that touches our emotions proves to be a powerful attractor for the eye.

However, just being an image is not enough. It also has to offer information scent. The image has to be relevant to our intent. And, because it is an image, we can determine relevance very quickly. We can make an assessment of both the relevance and attractiveness of an image in a split second and determine if it’s worthy of our attention. If it passes that test, then we will reward it with more deliberate scanning. For example, look at the two examples below.
Images prove to impact scanning more in the earlier stages of the interaction, attracting the eye and, by doing so, creating a different scanning pattern. We never see a lot of heat on the image itself, because we don’t have to spend a lot of time to understand it, but we do see images exerting a powerful pull on the eye.

[Image: justbehave-sep22-6.jpg]

We can determine relevance fairly quickly, and if an image proves to be irrelevant, we quickly move on. For example, in the heat map above, a query for “spice girls” (don’t judge us by the scenarios we use!) brought up a YouTube parody clip that proved much less relevant than the listings above and below it. Although the image caught our attention, it didn’t keep it. Notice that there was little lateral scanning of the title or the description snippet. There was no information scent.

[Image: justbehave-sep22-7.jpg]

Compare this with the results for Apple’s iPhone. In this case, the image does prove to be relevant and attracts attention. This leads to scanning, and more importantly, early scanning of the result adjacent to the image. This is a classic example of an image providing information scent, drawing increased scanning of the adjacent listing.

The pull of personalization

We also wanted to test how personalization could impact the user experience. In this scenario, we broke the interaction up into two parts. First, we gave participants a chance to find out more about Apple’s iPhone. We didn’t restrict their online browsing, but we did track which sites they went to and which searches they did. Then, we used this information to mock up a search results page for a second session, where we asked them to pick up where they left off in the first session and continue to find out more about the iPhone. We showed personalized results in organic positions 3, 4 and 5, tailored to where we felt the participant was in their cycle. The rest of the results were actual Google results.
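
As a rough illustration of how such a mock-up might be assembled, here is a minimal sketch assuming the results are held as simple lists; the function name and data shapes are hypothetical, since the article doesn’t describe the study’s actual tooling.

```python
# A minimal, hypothetical sketch of the mock-up construction described
# above: three personalized listings spliced into organic positions 3-5,
# with actual Google results everywhere else.

def build_personalized_serp(actual_results, personalized_results):
    """Replace organic positions 3, 4 and 5 (1-indexed) with the three
    personalized listings, keeping the rest of the page untouched."""
    assert len(personalized_results) == 3
    assert len(actual_results) >= 5
    serp = list(actual_results)        # copy so the originals are untouched
    serp[2:5] = personalized_results   # 0-based slice = positions 3, 4 and 5
    return serp

# Example with placeholder listings:
google = [f"google_{i}" for i in range(1, 11)]
tailored = ["tailored_a", "tailored_b", "tailored_c"]
print(build_personalized_serp(google, tailored))
```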

[Image: justbehave-sep22-8.jpg]

It was interesting to compare interactions in organic positions 3, 4 and 5, our test positions for the personalized results, between our personalized mock-ups and the non-personalized sessions. These personalized results, even though we didn’t move them up into the top two organic positions, performed remarkably well. The chart below shows the percentage of gaze time, percentage of fixations and actual clicks in the non-personalized vs. personalized results. In the heat maps above, we show the areas being compared, the first heat map being the non-personalized results and the second being the personalized ones.

[Image: justbehave-sep22-9.jpg]

Obviously, for the test positions, personalization added a strong information scent component, with the performance of these three listings doubling when compared to the non-personalized results. These three listings also pulled twice as many click-throughs as the top two organic listings, a dramatic difference from the non-personalized results, where listings 3, 4 and 5 drew only one third as many click-throughs as listings 1 and 2. Overall, the personalized results drew almost four times as many clicks as the non-personalized ones.
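
For readers curious how numbers like those in the chart are typically derived, here is a minimal sketch of the aggregation, assuming fixation records of the form (listing position, duration in milliseconds). The record format and function name are assumptions; eye-tracker exports vary by vendor, and the article doesn’t specify Enquiro’s pipeline.

```python
# A hedged sketch (not Enquiro's actual pipeline) of computing a region's
# share of gaze time and fixations, the kind of metrics shown in the
# chart above. Each fixation record is assumed to be a
# (listing_position, duration_ms) tuple.

def region_share(fixations, positions):
    """Fraction of total gaze time and of fixation count that landed on
    the listings in `positions`."""
    total_time = sum(duration for _, duration in fixations)
    in_region = [(pos, d) for pos, d in fixations if pos in positions]
    gaze_share = sum(d for _, d in in_region) / total_time
    fixation_share = len(in_region) / len(fixations)
    return gaze_share, fixation_share

# Comparing the test positions (organic 3, 4 and 5) across conditions
# would then look something like:
#   region_share(non_personalized_fixations, {3, 4, 5})
#   region_share(personalized_fixations, {3, 4, 5})
```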

Now let’s look at what happens when we combine universal search results with personalized ones. The combination of universal results, and personalization, at least as we’ve represented it, produces a very interesting scan pattern that could have some significant implications for optimum placement of messaging on the page.

[Image: justbehave-sep22-10.jpg]

Perhaps the easiest way to show this is to first look at how a typical scanning pattern would play out before the introduction of universal and personalized search results:
In the results set shown above, most users would orient in the upper left, just above (E). They would then start scanning down the page in a linear manner, first glancing at the top sponsored ads in Box “E”, then continuing down to the organic results in Box “C”. A consideration set would be built, likely consisting of the top two sponsored results and the top two organic ones, and the listing providing the best match of “scent” and intent would be chosen.

But let’s look at how the introduction of a graphic and three personalized results changes the scan pattern. Now, orientation happens on the picture and on the listing title immediately adjacent to it (A), and then the listing in Box B would likely be the first scanned. After this, the user would have to choose between the listings above and below. If personalization weren’t present, we would assume the results at the top would offer greater scent, but when the lower listings benefit from personalization, this might not be the case. Attention would be drawn down (which seems to be the natural tendency of the eye anyway) by the greater scent that personalization provides. We can see from the heat map that the personalized results drew a significant amount of scanning attention away from the top of the page.

The introduction of a visually richer and potentially more relevant search results page will have a dramatic impact on how we scan that page. Prior to this study, we had seen remarkable consistency in scan patterns on Google, but a fair amount of variance in scan patterns between Google, Yahoo! and Microsoft. This was despite all three engines having very similar layouts, and was due primarily to small formatting differences and how aggressive each engine was in showing top sponsored ads. But as the page becomes a more dynamic environment, we will adjust, moving away from the top-to-bottom, left-to-right F-shaped scan that produced the Golden Triangle toward much more of a berry-picking interaction that will vary according to the elements on the page. In the past, the definition of SERP real estate was fairly static: top and to the left. In the future, it seems it will be far more difficult to define.

Gord Hotchkiss is CEO of Enquiro, a search marketing firm that produces search engine user eye tracking studies and other research. The Just Behave column appears Fridays at Search Engine Land.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.

