Wednesday, 30 November 2022

LinkedIn just released 3 new features for pages

LinkedIn just announced three new features for brands to promote products, monitor trends, and do more with newsletters. Let’s jump in.

Generate more newsletter subscribers. With LinkedIn Newsletters, you can publish recurring Articles and build a subscriber community via a one-time notification to your Page followers and ongoing notifications to your Newsletter subscribers.

If a member searches for your profile, they should be able to find and subscribe to your newsletter easily. LinkedIn also suggests incorporating SEO best practices by optimizing your article titles, descriptions, and tags. In addition, LinkedIn will automatically send your new followers a notification prompting them to subscribe.

Product pages. Your products can now be discovered via an in-platform search on LinkedIn. Buyers can search by product, company, or category to discover what they’re looking for. You can also use product highlights to showcase specific product content on your Page and point interested members to key details and conversations. You can also re-share content from your product community to your Page and add posts to your product highlights.

Competitor analytics dashboard. The LinkedIn Pages Competitor Analytics dashboard has been upgraded and is now available on desktop and mobile. It can help you understand what competitors are doing and set your brand apart. You can now:

  • Track follower growth, recent posts, and engagement rates to see what your competitors are talking about and how members are responding, while also benchmarking your own content creation efforts.
  • Quickly surface trending content from competitors to stay on top of what’s happening in your industry and inspire your own content strategy.

Why we care. B2B brands and advertisers managing Pages on LinkedIn should test and use these new tools to make the most of their profiles. Since many brands may be rethinking their social strategies, companies should take advantage of the platform’s new features to gain first-party data through newsletter signups, along with the new ecommerce options and competitor metrics.

The post LinkedIn just released 3 new features for pages appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/38hmWgn

Google files lawsuit against company falsely promising Page 1 rankings

Google has filed a lawsuit against a company that allegedly was charging business owners money for free Google Business Profiles, selling fake reviews and promising first-page rankings.

Why we care. If anyone claims they are calling on behalf of Google and demands that you pay money for a free service, just don’t. Protect yourself. Do your research. Never pay such demands. It’s a scam.

G Verifier promised first-page rankings on Google. G Verifier, which Google’s suit alleges was run by Kaushal Patel of Ohio, threatened business owners that if they failed to pay (typically $99), their Business listings would be deactivated or marked as “permanently closed” and their positive reviews would be hidden – resulting in lost visibility and revenue.

Also, according to the filing:

The G Verifier Websites also make false promises regarding search prioritization and ranking. For instance, G Verifier tells business owners that they will “[g]et the first page on Google search” and that “[i]f you buy the service from us, your Google Maps business location will come first in Google search.” These statements, which imply superior placement among organic search results, are false and deceitful. No service can guarantee that Google’s search engine, which uses a complex algorithm, will place a particular webpage on the first page of results, much less that it will be the very first result.

Google said “hundreds and hundreds” of Business Profile users have contacted Google to report the scam since December 2021.

G Verifier also sold fake reviews. Google’s lawsuit noted that in G Verifier’s FAQ section, one question was: “Why should I buy Google reviews from you?” Also, G Verifier discussed its usage of Virtual Private Networks to get “reviews from the country or place of your choice.”

The website also allowed for the purchase of negative reviews, which could be used to harm competitors.

What Google says. In its blog post announcing the lawsuit, Google said:

“We are filing a lawsuit against scammers who sought to defraud hundreds of small businesses by impersonating Google through telemarketing calls. They also created websites advertising the purchase of fake reviews, both positive and negative, to manipulate reviews of Business Profiles on Google Search and Maps. This practice exploits entrepreneurs and small businesses — and it violates our policies on deceptive content.”

Google, Protecting small businesses from scammers

This is not the first company to impersonate Google, nor will it be the last. So always beware of anybody who claims they are from Google demanding any money for first-page rankings or for 100% free services.

The post Google files lawsuit against company falsely promising Page 1 rankings appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/kqzw2iu

Cyber Monday broke records this year, with almost $12 billion in US sales

Cyber Monday in the US brought in almost $12 billion this year, an increase of more than 8% over 2021. Globally, sales hit $46.2 billion, a 2.4% increase YoY. Black Friday also saw an increase this year of about 12%. This brings total sales for the weekend to around $68 billion. The figures are not adjusted for inflation, which has played a big part in driving up the cost of goods.

Record numbers for Shopify. Shopify reported that 52 million consumers globally spent $7.5 billion with Shopify merchants, a 19% increase over last year.

“Consumers voted with their wallets over Black Friday and Cyber Monday by shopping with independent businesses,” said Shopify President Harley Finkelstein. “The future of commerce is on any surface, whether that’s shopping online or in store.”

Toys topped the list of most popular items. The most popular toys this year were:

  • Pokémon cards
  • Legos
  • Hot Wheels
  • Disney Encanto
  • LOL Surprise dolls
  • Cocomelon and Hatchimals toys
  • Gaming consoles: PlayStation 5, Nintendo Switch and Xbox Series X
  • Games: FIFA 23, God of War Ragnarök, Madden 23, NBA 2K23, and Pokémon Scarlet & Violet

Highest spending items. The average selling price during Cyber Week increased about 3%. Not surprisingly, total spending on the most popular items also increased this year, some by nearly 700%!

  • Toys, 684% increase
  • Electronics, 391% increase
  • Computers, 372% increase
  • Sporting goods, 466% increase
  • Appliances, 458% increase
  • Books, 439% increase
  • Jewelry, 410% increase

Honorable mentions. Other products topping the popularity list were:

  • Smart TVs
  • Apple AirPods
  • Apple MacBooks
  • Tablets
  • Smart watches
  • Instant pots
  • Air fryers

Other factors weighing in. Aside from inflation and higher-priced items, this year we also saw an increase in trends surrounding discounts and chatbots.

  • Average discount rates hit 30%
  • The use of Buy Now Pay Later (BNPL) and other payment options rose 5% YoY, while the average order value for BNPL fell 5% in the U.S.
  • Chatbot messages increased 53% on Cyber Monday compared to 2021

Dig deeper. You can read the full articles from MediaPost here and here.

Why we care. Sales aren’t over yet. If you’re an ecommerce brand or advertiser, you may want to keep your ad campaigns or discounts running until after the holidays to capitalize on the upward trends.

The post Cyber Monday broke records this year, with almost $12 billion in US sales appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/B6mfMrS

How to use RStudio to create traffic forecasting models

There is a lot of fervor in the SEO industry for Python right now.

It is a comparatively easy programming language to learn and has become accessible to the SEO community through guides and blogs.

But if you want to learn a new language for analyzing and visualizing your search data, consider looking into R.

This article covers the basics of how you can produce time series forecasts in RStudio from your Google Search Console click data.

But first, what is R?

R is “a language and environment for statistical computing and graphics,” according to The R Project for Statistical Computing.

R isn’t new and has been around since 1993. Still, learning some of the basics of R – including how to interact with Google’s various APIs – can be advantageous for SEOs.

If you want to pick up R as a new language, good courses to learn from are:

But if you grasp the basics and want to learn data visualization fundamentals in R, I recommend Coursera’s guided project, Application of Data Analysis in Business with R Programming.

And then you also need to install:

What follows are the steps for creating traffic forecasting models in RStudio using click data.

Step 1: Prepare the data

The first step is to export your Google Search Console data. You can either do this through the user interface, exporting the data as a CSV:

GSC export options

Or, if you want to pull your data via RStudio directly from the Google Search Console API, I recommend you follow this guide from JC Chouinard.

If you do this via the interface, you’ll download a zip file with various CSVs, from which you want the workbook named “Dates”:

Your date range can be from a quarter, six months, or 12 months – all that matters is that you have the values in chronological order, which this export easily produces. (You just need to sort Column A, so the oldest values are at the top.)
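
If you would rather sort in R than in the spreadsheet, you can do so after importing the data in Step 2; a minimal sketch, assuming the import below produces a data frame called mdat with a Date column:

## Optional: sort rows so the oldest dates come first
## (run after the read_csv() call in Step 2, with the packages loaded)
mdat <- mdat %>%
  arrange(Date)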




Step 2: Plot the time series data in RStudio

Now we need to import and plot our data. To do this, we must first install four packages and then load them.

The first command to run is:

## Install packages
install.packages("tidyverse")
install.packages("tsibble")
install.packages("fabletools")
install.packages("bsts")

Followed by:

## Load packages
library("tidyverse")
library("tsibble")
library("fabletools")
library("bsts")

You then want to import your data. The only change you need to make to the command below is the file name (keeping the .csv extension):

## Read data
mdat <- read_csv("example data csv.csv",
                 col_types = cols(Date = col_date(format = "%d/%m/%Y")))

The last two commands for plotting your data then make the time series object and plot the graph itself:

## Make time series object
ts_data <- mdat %>%
  as_tsibble(index = "Date")

Followed by:

## Make plot
autoplot(ts_data) +
  labs(x = "Date", subtitle = "Time series plot")

And in your RStudio interface, you will have a time series plot appear:
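
One caveat before modeling: Search Console exports can skip days with no recorded data, which a tsibble treats as implicit gaps. It is worth filling those gaps first; a small optional sketch, assuming a missing day simply means zero clicks:

## Optional: fill missing days (assumption: no data = no clicks)
ts_data <- ts_data %>%
  fill_gaps(Clicks = 0)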

Step 3: Model and forecast your data in RStudio

At this stage, it’s important to acknowledge that forecasting is not an exact science and relies on several truths and assumptions:

  • Historical trends and patterns are assumed to continue, to varying degrees, over time.
  • Forecasting will contain errors and anomalies, because your data set (your real-world click data) will contain anomalies that could be construed as errors.
  • Forecasts typically revolve around the average, making group forecasts more reliable than running a series of micro-forecasts.
  • Shorter-range forecasting is typically more accurate than longer-range forecasting.

With this out of the way, we can begin to model and forecast our traffic data.

For this article, I will model the data as a Bayesian structural time series (BSTS) forecast, using the bsts package we installed earlier.

Most marketers will have seen or at least be familiar with this type of model, as it is commonly used across many industries for forecasting purposes.

The first command we need to run is to make our data fit the BSTS model:

## Specify the model state: local linear trend plus a seasonal component
ss <- AddLocalLinearTrend(list(), ts_data$Clicks)
ss <- AddSeasonal(ss, ts_data$Clicks, nseasons = 52)
## Fit the model with 500 MCMC iterations
model1 <- bsts(ts_data$Clicks,
               state.specification = ss,
               niter = 500)

And then plot the model components:

plot(model1, "comp")
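
Because bsts is fitted by MCMC, the early draws (the "burn-in") are usually discarded before forecasting. predict() applies a sensible default, but you can control it explicitly with the package's SuggestBurn() helper; a short optional sketch:

## Optionally control how many early MCMC draws are discarded
burn <- SuggestBurn(0.1, model1)  # drop the first 10% of draws
pred_burn <- predict(model1, horizon = 365, burn = burn)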

And now we can visualize one- and two-year forecasts.

Going back to the previously mentioned general forecasting rules, the further into the future you forecast, the less accurate it becomes. Thus, I stick to two years when doing this.

And because BSTS forecasts include an upper and lower bound, the forecast also becomes pretty uninformative past a certain point, as those bounds widen.

The below command will produce a one-year future BSTS forecast for your data:

# 1-year forecast: predict 365 days ahead,
# plotting the last 200 days of the original series for context
pred1 <- predict(model1, horizon = 365)
plot(pred1, plot.original = 200)

And you’ll return a graph like this:

1-year forecast graph
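
If you want the raw numbers behind the chart (to share in a spreadsheet, for example), the prediction object exposes its posterior mean and interval bounds; a minimal sketch, assuming the default prediction interval and daily data:

## Export the forecast values alongside their interval bounds
forecast_df <- data.frame(
  date  = max(ts_data$Date) + 1:365,
  mean  = pred1$mean,
  lower = pred1$interval[1, ],
  upper = pred1$interval[2, ]
)
write.csv(forecast_df, "clicks_forecast_1yr.csv", row.names = FALSE)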

To produce a two-year forecasting graph from your data, run the command below:

# 2-year
pred2 <- predict(model1, horizon = 365*2)
plot(pred2, plot.original = 365)

And this will produce a graph like this:

2-year forecast graph

As you can see, the upper and lower bounds in the one-year forecast had a range of -50 to +150, whereas the 2-year forecast has -200 to +600.

The further into the future you forecast, the greater this range becomes and, in my opinion, the less useful the forecast becomes.

The post How to use RStudio to create traffic forecasting models appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/6CnHNLh

5 ways to improve your content workflow and strategy in 2023 by Canto

Digital content creation and management seem to be more complex than ever. Workflows now need to accommodate remote workers and resources, worldwide offices, and security and privacy concerns, not to mention the growing pressure on content and creative teams to produce more content in less time and with fewer resources.

So how are the most successful teams currently executing production and managing their workflows? To answer this question and find out the best practices for improving efficiency, Canto surveyed nearly 650 professionals in the United States and the United Kingdom involved in the production, management and/or strategy for content and creative assets at their organization.

Tune into this webinar to learn the results of the survey and take an in-depth look at the content strategies, workflows and technologies that have made these organizations successful. You’ll take away valuable tips on how you can revamp your own content programs in 2023 and dive deep into the five areas to improve content workflow and strategy, including:

  1. Running a content audit for all relevant and current content
  2. Centralizing your content into a single location and applying metadata
  3. Building a technology stack that is optimized for collaboration
  4. Prioritizing your content workflows
  5. Focusing on brand consistency and speed to market

Planning and creating content is much harder than it used to be, with disconnected teams and a broken digital content supply chain. Watch this webinar so you can plan, create, manage and deliver your best content program in 2023.

The post 5 ways to improve your content workflow and strategy in 2023 appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/60je2cl

How to create long-form content that ranks, gets read and converts

There are many questions about content length in SEO and what ranks the best. 

While Google says there’s no specific word count they recommend, some studies have shown that long-form content tends to rank higher than short-form.

If you’re interested in writing long-form content, you probably want to make sure it’s going to rank, get read, and convert so you create an ROI for your effort.

What is long-form content? 

Most consider long-form content to be over 1,000 words. It’s a content piece that goes in-depth, offers extra value for the reader and includes more research, insights, and information than a quick read. 

Long-form content should leave the reader feeling comfortable with the subject and as if their questions have been answered and they know what to do with the information or how it applies to them.

What should you include in long-form content?

You want to create content that helps your reader. Think about them and what they need or want to learn from this piece. What questions do they have? 

It’s your responsibility to anticipate their questions and answer them in your work. If you’re unsure what questions they have, then think about what you want to ensure they know.

Use the following guide questions to identify which information is most important to help them get to the next stage:

  • What do they need to know?
  • Why do they need to know it?
  • What can they do with the information?
  • What baseline information should they know to make this make more sense?
  • What if they don’t have that baseline knowledge already?
  • How does this information impact them?
  • What’s their next step?

Don’t write a bunch of unnecessary fluff to try to hit some word count. 

You must ensure you’re providing value and helping your ideal customer so they want to consume more of your content. 

If you get them to the site and they find nothing of value, they’ll be less likely to stay or return another time.

Write to tell a story and provide value rather than writing to an arbitrary word count. Your content will be better in the long run.

Where do you start when creating long-form content to rank, get read and convert?

To start, make sure there’s a conversion path for your reader. Your content pieces need to tie to your products or services to drive revenue and conversions. 

If you’re answering questions for your potential customer and providing helpful information, they’re more likely to convert if you offer a solution to their issues. Be helpful, and link to additional information that might help them move to the next step. 

If you have an opt-in that ties to this content piece and is the next step for them, offer it in your work. You’re helping them and building your email list at the same time.

If you want your content to convert, you need to make sure there’s a conversion path. Everything you write needs to somehow tie to your core products and services. 

I teach my students to choose content pillars that link to their products and services and write about topics related to those subjects.

Creating a long-form content piece and ranking at the top of Google is great, but if it drives irrelevant traffic, it won’t convert, and that’s a waste of your efforts.




How do you make sure your long-form content ranks?

We all know we have no control over the Google ranking algorithm, but we also know how it works and what’s most important from an optimization standpoint.  

First, verify there’s search demand for your topic idea, choose a keyword (or keywords) you can rank for, write for your audience, and finally, optimize your content piece.

Make sure there’s interest in your topic

Start by making sure there’s an audience for your content piece. 

It may seem like a great idea to you. However, if no one is searching for information on the subject, it’s unlikely that you’ll get much traffic due to low demand. 

That said, search volume is not the most critical factor in choosing a keyword, and we’ll talk more about that.

Brainstorm the topics you think you want to cover, and then go to Google and see what’s there today. 

  • Who’s written on the subject you’re considering using for your content piece? 
  • Is there already information on the topic? 
  • Do you have a new angle, new insights, or something more to add to the conversation? 

If not, this might not be the best topic. Search the topic and see what shows up in Google Suggested Search.

Is there something closely related to your topic that Google suggests, or are there questions related to it in the People Also Ask section? 

If you see your topic idea in either of those places, that’s good because it means there’s interest in your potential topic.

Research keywords

Once you know your topic is viable, use your favorite keyword research tool to identify the keyword or keywords you want to target for this new long-form content piece. 

Long-form pieces can rank for multiple keywords a bit easier than short-form pieces just due to the length of the content piece. 

Choose your keywords wisely. Look for a primary keyword with good search volume and the ability for your website to rank on Page 1.

Choose your keywords

Go to Google and see who’s currently ranking on Page 1 for the keyword you’re considering using as your primary one. 

  • Are the websites similar to yours? 
  • Are they more prominent brands or companies? 
  • How in-depth are the articles? 
  • Can you provide additional insight or value (not just more words) than the sites currently ranking?

If you see other websites similar to yours and content pieces that you feel aren’t as in-depth or are missing information on the topic you want to write about, then you’re probably making a good choice in your keyword selection.

Choose the keyword with the highest search volume that your website has the best chance of ranking for and is the word your Ideal Customer uses when searching for information on this subject.

How to make sure your content gets read

Now it’s time to write. Go back to your brainstorming notes. 

What information do you need to include to answer your readers' questions?

Be sure you have that information. Sort it in a way that it’s easy to follow and understand so your reader wants to continue. 

A long-form content piece is a time commitment for someone to read.

Thus, you must provide value, insights, statistics, and things that are unique from something else they might have read on the subject before – or they won’t continue reading.

Format your piece in a reader-friendly way. This is especially important with longer pieces. Consider:

  • Using bullets and lists – white space is your friend.
  • Using headers (suitable for SEO and your reader). 
  • Breaking your text up into small, easy-to-read chunks. 
  • Keeping your sentences and paragraphs short.

It’s better to have many small paragraphs broken up with bullets and numbers than big blocks of text. 

People will shy away from reading a piece if the content isn’t formatted in a reader-friendly way.

Your final step is to optimize your content piece

Use your keyword in all of your SEO elements. Make sure it’s in the first paragraph of the copy, which it should be since your keyword is closely tied to your content topic. In most instances, your keyword will be in the title of your piece.

Add your keyword to your URL, image file name, and header tags, and use it throughout your copy. 

Focus on providing value, being helpful, and offering information your ideal customer needs rather than how often you use your keyword. You’ll use it naturally by concentrating on your reader.

Done right, long-form content is worth the investment

Long-form content can be a significant time investment. It takes longer to write in-depth pieces than quick bites or short-form. 

However, the payoffs can be great. Long-form pieces often rank higher in the search results than short pieces. 

And if you’re creating content with an audience, you can rank for and tie to your business, bring relevant traffic to your website, and hopefully, get the conversion. 

It’s worth testing long-form content if you haven’t done it yet. Not every piece you write has to be long, but those most important to your business should be longer and more in-depth.

The post How to create long-form content that ranks, gets read and converts appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/7z1NTcs

Tuesday, 29 November 2022

Webinar: Do more with less to get ahead in 2023 by Cynthia Ramsaran

As 2022 winds down, marketers are being asked to focus on efficiency and “do more with less.” The most successful have leveraged tools such as calculated metrics, artificial intelligence and real-time insights.

In this webinar, learn how a financial institution with over 21 million active customers connects its customer data, segments audiences faster and delivers personalized experiences in real time.

Join Salesforce in this free webinar and learn real use cases on finding success and business results by sending fewer communications that are more relevant and targeted.

Register today for “Do More with Less: Connect Customer Data to Drive Marketing Efficiency,” presented by Salesforce.


Click here to view more Search Engine Land webinars.

The post Webinar: Do more with less to get ahead in 2023 appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/lRz4GKw

How to improve E-A-T for websites and entities

The concept of expertise, authoritativeness and trustworthiness (E-A-T) has played a central role in ranking keywords and websites – and not just in recent years. 

Speaking at SMX Next, Hyung-Jin Kim, VP of Search at Google, announced that Google has been implementing E-A-T principles for ranking for more than 10 years.

Why is E-A-T so important?

In his SMX 2022 keynote, Kim noted:

“E-A-T is a template for how we rate an individual site. We do it to every single query and every single result. It’s pervasive throughout every single thing we do.”

From this statement, it is clear that E-A-T is important not just for YMYL pages but for all topics and keywords. Today, E-A-T seemingly impacts many different areas in Google’s ranking algorithms.

For several years, Google has been under considerable pressure over misinformation in search results. This is underscored in the white paper “How Google fights disinformation,” presented in February 2019 at the Munich Security Conference.

Google wants its search systems to surface great content for each query, taking the user’s context into account and favoring the most reliable sources. Quality raters play a special role here.

“A key part of our evaluation process is getting feedback from everyday users about whether our ranking systems and proposed improvements are working well. But what do we mean by “working well”? We publish publicly available rater guidelines that describe in great detail how our systems intend to surface great content.” 

Evaluation according to E-A-T criteria is crucial for quality raters.

“They evaluate whether those pages meet the information needs based on their understanding of what that query was seeking, and they consider things like how authoritative and trustworthy that source seems to be on the topic in the query. To evaluate things like expertise, authoritativeness, and trustworthiness—sometimes referred to as “E-A-T”—raters are asked to do reputational research on the sources.” 

A distinction must be made between the document’s relevance and the source’s quality. The ranking magic at Google takes place in two areas. 

This becomes clear when you take a look at the statements made by various Google spokespersons about a quality score at the document and domain level.

In his SMX West 2016 presentation titled How Google Works: A Google Ranking Engineer’s Story, Paul Haahr shared the following:

“Another problem we were having was an issue with quality and this was particularly bad. We think of it as around 2008, 2009 to 2011. We were getting lots of complaints about low-quality content and they were right.

We were seeing the same low-quality thing but our relevance metrics kept going up and that’s because the low-quality pages can be very relevant.

This is basically the definition of a content farm in our vision of the world, so we thought we were doing great.

Our numbers were saying we were doing great and we were delivering a terrible user experience and turned out we weren’t measuring what we needed to. So what we ended up doing was defining an explicit quality metric which got directly at the issue of quality. It’s not the same as relevance.

And it enabled us to develop quality related signals separate from relevant signals and really improve them independently. So when the metrics missed something, what ranking engineers need to do is fix the rating guidelines… or develop new metrics.”

(This quote is from the part of the talk on the quality rater guidelines and E-A-T.)

Haahr also mentioned that:

  • Trustworthiness is the most important part of E-A-T. 
  • The criteria mentioned in the quality rater guidelines for bad and good content and websites, in general, are the benchmark pattern for how the ranking system should work.

In 2016, John Mueller stated the following in a Google Webmaster Hangout:

“For the most part, we do try to understand the content and the context of the pages individually to show them properly in search. There are some things where we do look at a website overall though.

So for example, if you add a new page to a website and we’ve never seen that page before, we don’t know what the content and context is there, then understanding what kind of a website this is helps us to better understand where we should kind of start with this new page in search.

So that’s something where there’s a bit of both when it comes to ranking. It’s the pages individually, but also the site overall.

I think there is probably a misunderstanding that there’s this one site-wide number that Google keeps for all websites and that’s not the case. We look at lots of different factors and there’s not just this one site-wide quality score that we look at.

So we try to look at a variety of different signals that come together, some of them are per page, some of them are more per site, but it’s not the case where there’s one number and it comes from these five pages on your website.”

Here, Mueller emphasizes that in addition to the classic relevance ratings, there are also rating criteria that relate to the thematic context of the entire website. 

This means that there are signals Google takes into account to classify and evaluate the entire website thematically. The proximity to the E-A-T rating is obvious.

Various passages on E-A-T and the quality rater guidelines can be found in the Google white paper previously mentioned:

“We continue to improve on Search every day. In 2017 alone, Google conducted more than 200,000 experiments that resulted in about 2,400 changes to Search. Each of those changes is tested to make sure it aligns with our publicly available Search Quality Rater Guidelines, which define the goals of our ranking systems and guide the external evaluators who provide ongoing assessments of our algorithms.”

“The systems do not make subjective determinations about the truthfulness of webpages, but rather focus on measurable signals that correlate with how users and other websites value the expertise, trustworthiness, or authoritativeness of a webpage on the topics it covers.”

“Ranking algorithms are an important tool in our fight against disinformation. Ranking elevates the relevant information that our algorithms determine is the most authoritative and trustworthy above information that may be less reliable. These assessments may vary for each webpage on a website and are directly related to our users’ searches. For instance, a national news outlet’s articles might be deemed authoritative in response to searches relating to current events, but less reliable for searches related to gardening.”

“Our ranking system does not identify the intent or factual accuracy of any given piece of content. However, it is specifically designed to identify sites with high indicia of expertise, authority, and trustworthiness.”

“For these “YMYL” pages, we assume that users expect us to operate with our strictest standards of trustworthiness and safety. As such, where our algorithms detect that a user’s query relates to a “YMYL” topic, we will give more weight in our ranking systems to factors like our understanding of the authoritativeness, expertise, or trustworthiness of the pages we present in response.”

The following statement is particularly interesting as it becomes clear how powerful E-A-T can be in certain contexts and concerning events compared to classic relevance factors.

“To reduce the visibility of this type of content, we have designed our systems to prefer authority over factors like recency or exact word matches while a crisis is developing.”

The effects of E-A-T could be seen in various Google core updates in recent years.

E-A-T influences rankings – but it is not a ranking factor

Plenty of discussions in recent years centered on whether E-A-T influences rankings and, if so, how.  Almost all SEOs agree it is a concept or a kind of layer that supplements the relevance scoring. 

Google confirms that E-A-T is not a ranking factor. There is also no E-A-T score. 

E-A-T comprises various signals or criteria and serves as a blueprint for how Google’s ranking algorithms should determine expertise, authority and trust (i.e., quality).

However, Google also speaks of a rating applied algorithmically to every search query and result. In other words, there must be signals or data that can be used as a basis for an assessment.

Google uses the manual ratings of the search evaluators as training data for its self-learning ranking algorithms (i.e., supervised machine learning) to identify patterns for high-quality content and sources.

This brings Google closer to the E-A-T evaluation criteria in the quality rater guidelines.

If the content and sources rated high or poor by the search evaluators repeatedly show the same specific patterns, and the frequency of those patterns reaches a threshold, Google could take the corresponding criteria and signals into account for future rankings.

In my opinion, E-A-T draws on several different origins:

  • Entity-based rating.
  • Coati (ex-Panda) based rating.
  • Link-based rating.

To rate sources such as domains, publishers or authors, Google accesses an entity-based index such as the Knowledge Graph or Knowledge Vault. Entities can be brought into a thematic context, and the entities’ connection can be recorded.
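
You can check whether an entity (a brand, a publisher, an author) is already represented there via Google’s public Knowledge Graph Search API; a minimal sketch in R, assuming you have an API key (the query string is just an example):

## Look up an entity in Google's Knowledge Graph (requires an API key)
library(httr)
library(jsonlite)

resp <- GET(
  "https://kgsearch.googleapis.com/v1/entities:search",
  query = list(
    query = "Search Engine Land",  # example entity to look up
    key   = "YOUR_API_KEY",        # placeholder for your own API key
    limit = 3
  )
)
fromJSON(content(resp, as = "text"))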

To evaluate the content quality related to individual documents and the entire domain, Google can fall back on tried and tested algorithms from Panda or Coati today.

PageRank is the only signal for E-A-T officially confirmed by Google. Google has been using links to assess trust and authority for over 20 years.

Possible E-A-T origins

Based on Google patents and official statements, I have summarized concrete signals for an algorithmic E-A-T evaluation in this infographic.

Possible factors for an E-A-T evaluation

SEOs must differentiate these possible signals to positively influence E-A-T.

On-page

Signals that come from your own website. This is about the content as a whole and in detail.

Off-page

Signals coming from external sources. This can be external content, videos, audio or search queries that can be crawled by Google. 

Links and co-occurrences from the name of the company, the publisher, the author or the domain in connection with thematically relevant terms are particularly important here. 

The more frequently these co-occurrences appear, the more likely the main entities have something to do with the topic and the associated keyword cluster. 

These co-occurrences must be identifiable or crawlable by Google. Only then can they be registered by Google and factored into the E-A-T concept. In addition to co-occurrences in online texts, co-occurrences in search queries are also a source for Google.

Sentiment

Google uses natural language processing to analyze the sentiment around people, products and company entities.

Reviews from Google, Yelp or other platforms that offer the option of leaving a rating can be used here.

Google patents deal with this, such as “Sentiment detection as a ranking signal for reviewable entities.”

Through these findings, SEOs can derive concrete measures for positively influencing E-A-T signals.




15 ways to improve your E-A-T

With E-A-T, Google is ultimately trying to adapt the "thematic brand positioning" that marketers have long used to establish brands, in combination with messages, in people's minds.

The more often a person perceives a person and/or a provider in a certain thematic context, the more trust they will give to the product, the service provider, and the medium.

In addition, authority increases if this entity is:

  • Mentioned more frequently in thematic contexts than other market participants.
  • Positively referenced by other credible and authoritative sources.

Through these repetitions, a neural network in the brain is retrained. We are perceived as a brand with thematic authority and trustworthiness.

As a result, Google's neural network also learns who is an authority and, thus, trustworthy for one or more topics. This applies in particular to co-occurrences in the awareness, consideration and preference phases.

The more of the customer journey you cover for a topic, the broader the keyword cluster Google associates with you. Once this link is drawn, your own content belongs to the relevant set.

These co-occurrences can be generated, for example, through:

  • Appropriate on-page content.
  • Appropriate internal linking.
  • Appropriate off-page content.
  • External/incoming links, anchor texts and the environment of the link influencing search patterns.

You have a lot of creative leeway, especially with off-page signals. But there are also no typical SEO measures that generate co-occurrences here.

As a result, those responsible for SEO are increasingly becoming the interface between technology, editing, marketing and PR.

Below is a summary of possible concrete measures to optimize E-A-T.

1. Create sufficient topic-relevant content on your own website 

Building semantic topic worlds within your website shows Google that you have in-depth knowledge and expertise on a topic. 

2. Link semantically-appropriate content with the main content 

When building up semantic topic worlds, the individual pieces of content should be meaningfully linked to one another.

A possible user journey should also be taken into account: what will interest the consumer next, or in addition?

Outgoing links are useful if they show the user and Google that you are referring to other authoritative sources.

3. Collaborate with recognized experts as authors, reviewers, co-authors and influencers

"Recognized" means that they are already recognized online as experts by Google through:

  • Online publications.
  • Amazon author profiles.
  • Their own blogs and websites.
  • Social media profiles.
  • Profiles on university websites.
  • And more.

It is important that the authors show references that can be crawled by Google in the respective thematic context. This is particularly recommended for YMYL topics. 

Authors who themselves have long published web-findable content on the topic are preferable, as they are most likely known as an entity in the topical ontology.

4. Expand your share of content on a topic

The more content a company or author publishes on a topic, the greater its share of the document corpus relevant to the topic. 

This increases your thematic authority on the topic. Whether this content is published on your website or in other media doesn’t matter. What’s important is that it can be captured by Google.

For instance, the proportion of your own topic-relevant content can be expanded beyond your website through guest articles in other relevant authority media. The more authoritative they are, the better.

Other ways to increase your share of content include:

  • Creating thematically appropriate guest posts and linking this content with your own website and social media profiles.
  • Arranging interviews on relevant topics.
  • Giving lectures at specialist events.
  • Participating in webinars as a speaker.

5. Write text in simple terms

Google uses natural language processing to understand content and mine data on entities. 

Simple sentence structures are easier for Google to capture than complex sentences. You should also call entities by name and only use personal pronouns to a limited extent. Content should be created with logical paragraphs and subheadings in mind for readability. 

6. Use TF-IDF analyses for content creation

Tools for TF-IDF analysis can be used to identify semantically related sub-entities that should appear in content on a topic. Using such terms demonstrates expertise.
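
As an illustration, here is a minimal sketch in R (the language used in the RStudio article above) that scores terms with TF-IDF via the tidytext package; the corpus and counts are toy values:

## Score terms by TF-IDF (toy word counts for illustration)
library(dplyr)
library(tidytext)

word_counts <- tibble::tribble(
  ~page,    ~word,     ~n,
  "page_a", "seo",     10,
  "page_a", "content",  4,
  "page_b", "seo",      2,
  "page_b", "recipes",  8
)

word_counts %>%
  bind_tf_idf(word, page, n) %>%
  arrange(desc(tf_idf))  # high tf_idf = terms distinctive to a page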

7. Avoid superficial and thin content

The presence of a lot of thin or superficial content on a domain might cause Google to devalue the website’s quality. Delete or consolidate such content instead.

8. Fill the knowledge gap

Most content you see online is a curation or copy of existing information that is already mentioned in hundreds or thousands of other pieces of content. 

True expertise is achieved by adding new perspectives and aspects to a topic.

9. Adhere to a consensus 

In a scientific paper, Google describes knowledge-based trust, in which content sources are evaluated based on how consistent their information is with general consensus.

This can be crucial, especially for YMYL topics (e.g., medical topics), for ranking your content on the first page of search results.

10. Back up statements with links to authoritative sources

Information and statements should be backed up with facts and supported with appropriate links to authoritative sources.

This is especially important for YMYL topics.

11. Be transparent about authors, publishers and their other content and commitments

Author boxes are not a direct ranking signal for Google, but they can help Google find out more about a previously unknown author entity.

An imprint (legal notice) and an “About us” page are also advantageous. Also, include links to:

  • Commitments.
  • Content.
  • Profiles as authors, speakers, and association memberships.
  • Social media profiles.

Entity names work well as anchor texts for links to these profiles. Structured data, such as schema markup, is also recommended, as sketched below.
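
To illustrate, here is a minimal sketch that generates Person markup for an author box, written in R with the jsonlite package to stay consistent with the code elsewhere in this issue; every name and URL is a placeholder:

## Generate JSON-LD Person markup for an author box (placeholder values)
library(jsonlite)

author <- list(
  "@context" = "https://schema.org",
  "@type"    = "Person",
  "name"     = "Jane Doe",                       # hypothetical author
  "url"      = "https://example.com/about/jane",
  "sameAs"   = list(                             # the entity's other profiles
    "https://www.linkedin.com/in/janedoe",
    "https://twitter.com/janedoe"
  )
)

toJSON(author, auto_unbox = TRUE, pretty = TRUE)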

12. Avoid too many advertising banners and recommendation ads 

Aggressive advertising (e.g., Outbrain or Taboola ads) that influences website use can lead to a lower trust score.

13. Create co-occurrences outside of your own website through marketing and communication

With E-A-T, it is vital to position yourself as a brand thematically by:

  • Linking to subject-related specialist publications from your website so that Google can assign them more quickly and easily.
  • Building links from thematically relevant environments.
  • Offline advertising to influence search patterns on Google or create suitable co-occurrences in search queries (TV advertising, flyers, advertisements). Note that this is not pure image advertising but rather advertising that contributes to positioning in a subject area.
  • Co-operating with suppliers or partners to ensure suitable co-occurrences.
  • Creating PR campaigns for suitable co-occurrences. (No pure image PR.)
  • Generating buzz in social networks around your own entity.

14. Optimize user signals on your own website 

Analyze search intent for each main keyword. The content’s purpose should always match the search intent.

15. Generate great reviews

People tend to report negative experiences with companies in public.

This can also be a problem for E-A-T, as it can lead to negative sentiment around the company. That's why you should encourage satisfied customers to share their positive experiences.

The post How to improve E-A-T for websites and entities appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/JBVxf3X

Monday, 28 November 2022

Twitter has launched 3 new ad targeting options

Twitter has just launched new ad targeting options, including the new ‘Conversions’ objective it originally announced back in August.

The new ‘Conversions’ objective. Advertisers are now able to focus their ad campaigns on those users who are most likely to take specific actions.

Previously, Twitter advertisers were able to optimize campaigns to focus on clicks, site visits, and conversions. But now you can further optimize for page views, content views, add-to-cart, and purchases.

As I mentioned, the updates were announced in August but released just before Thanksgiving.

What Twitter says. “Website Conversions Optimization (WCO) is a major rebuild of our conversion goal that will improve the way advertisers reach customers who are most likely to convert on a lower-funnel website action (e.g. add-to-cart, purchase).”

So instead of just aiming to reach people who are likely to tap on your ad, you can expand that focus to reach users that are more likely to take next-step actions beyond that, like:

  • Add-to-cart
  • Purchase
  • Register contact info
  • Subscribe

“Our user-level algorithms will then target with greater relevance, reaching people most likely to meet your specific goal – at 25% lower cost-per-conversion on average, per initial testing.”

Dynamic Product Ads. Dynamic Product Ads were initially launched in 2016 (at least in an earlier version). But this new update integrates a more privacy-focused approach, in order to optimize ad performance with potentially fewer signals.

Collection Ads. The Collection Ads format enables advertisers to share a primary hero image, along with smaller thumbnail images below it.

Twitter says, “The primary image remains static while people can browse through the thumbnails via horizontal scroll. When tapped, each image can drive consumers to a different landing page.”

Dig deeper. Read the full article on the Twitter blog.

Why we care. Yes, Twitter is still releasing new ad updates, despite cutting most of its staff. But it’s likely that these updates were almost completely finished when Musk took over, since they had been in development for months.

Advertisers who are still on Twitter should test the new ad features and options to gauge whether they’re valuable additions.

The post Twitter has launched 3 new ad targeting options appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/2JxsP1o

Yahoo now has a 25% stake in Taboola

Yahoo has just finalized a 30-year exclusive advertising partnership with Taboola, which secures it a 25% stake in the company. This deal will allow Yahoo to use Taboola’s tech to manage its native ads.

Taboola’s native edge. Taboola specializes in native ads which can be found on popular sites like CNN and MSN. The ads typically look like part of the website and can be informative or entertaining.

However, shares of Taboola have fallen nearly 80% since last year. In January, it merged with a special purpose acquisition company and was valued at $2.6 billion.

The deal with Yahoo gives Taboola the exclusive license to sell native ads across Yahoo’s sites, and the companies will share revenue from those ad sales. The companies did not disclose the terms of the revenue split. The deal will make Yahoo Taboola’s largest shareholder.




Meta and TikTok weigh in. Executives at companies like Meta and TikTok have warned that advertisers skittish about the economy have pulled back on their spending. But Jim Lanzone, the chief executive of Yahoo, said in an interview that the deal with Taboola puts both companies in a good position for when the ad market revives, the NY Times said.

“I’m thinking, you know, five, 10, 30 years,” Lanzone said. “Digital advertising has huge wind at its back over the long term.” He added that while the company will continue to try to bring in money in other ways, such as expanding its subscription business or investing in e-commerce, “we have hundreds of millions of people consuming news and sports and finance on market-leading properties that are heavily monetized through advertising — and will continue to be.”

Dig deeper. You can read the full article from the NY Times here.

Why we care. Advertisers who run native ads may now have another option to expand their reach. Yahoo also said it is attempting to "build up each of its products within its mini-media empire and capitalize on its audience." If this happens, advertisers and brands will have more competitive options when choosing which platforms get their marketing dollars.

The post Yahoo now has a 25% stake in Taboola appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/JNaCUwt

Black Friday sales up nearly 12% from 2021

Inflation and “sagging consumer sentiment” made for a relatively muted Black Friday in the US this year. However, the numbers show that shopping centers are making a comeback, as people are enjoying brick-and-mortar shopping experiences again.

Adobe Analytics said online sales rose 2.3% to $9.12 billion, against the company’s initial projection of $9 billion. (For perspective, this percentage increase lagged far behind the country’s inflation rate, which is running at almost 8%.)

Shopping statistics. Brick-and-mortar retailers, who for the last two Black Fridays contended with Covid-19 outbreaks and restrictions, saw in-store visits tick up this year by 2.9% compared to 2021, according to a report by Sensormatic Solutions. Interestingly, visits to physical stores on Thanksgiving Day increased by 19.7% compared to last year.

Enclosed mall traffic increased 1.2%, and traffic to non-malls, such as strip centers and standalone stores, increased 4.7% compared to Black Friday 2021, Bloomberg reports. Some reports indicate that though the crowds were smaller, shoppers waited in line much longer due to many stores experiencing staffing issues.

Shopify merchants break records. Shopify announced a record-setting Black Friday with sales of $3.36 billion from the start of Black Friday in New Zealand through the end of Black Friday in California.

  • Peak sales per minute: $3.5 million USD on Black Friday at 12:01 PM EST
  • Top selling countries and cities where shoppers made purchases from: United States, United Kingdom and Canada, with the top-selling cities on Black Friday including London, New York, and Los Angeles
  • Top product categories: Apparel & accessories, followed by health & beauty, and home & garden, with trending products including Snocks GmbH (Boxershorts), rhode (peptide glazing fluid), and Brooklinen (Luxe Core Sheet Set)
  • Average cart price: $102.31 USD or $105.10 USD on a constant currency basis  
  • 15%: Cross-border orders worldwide on Black Friday as a percentage of total orders 
  • 27%: Growth in POS sales made by Shopify merchants globally over last year’s Black Friday
  • Economic uncertainty drove many to alternative payment methods. For example, buy now, pay later (BNPL) orders jumped 78% the week of November 19 compared with the previous week. BNPL revenues rose 81% over the same period

What Shopify says. “Black Friday Cyber Monday has grown into a full-on shopping season. The weekend that started it all is still one of the biggest commerce events of the year, and our merchants have broken Black Friday sales records again,” said Harley Finkelstein, President of Shopify. “Our merchants have built beloved brands with loyal communities that support them. This weekend, we’re celebrating the incredible power of entrepreneurship on a global stage.” 

Dig deeper. You can read the entire article from Bloomberg here.

Why we care. Consumers are still spending. Ecommerce merchants and advertisers who promote online should still prioritize and continue to run ads, even after the BFCM holiday. Though many are still facing supply chain issues, testing different discounts and offers should be top of mind going into the Christmas season.

The post Black Friday sales up nearly 12% from 2021 appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/CjZyHXN

Microsoft is planning to double the size of its ad business to $20 billion

Microsoft is looking to double its ad business from $10 billion a year to $20 billion in revenue. Leadership didn’t specify a timeframe, but if the goal is reached, Microsoft will become the sixth-largest digital ad seller worldwide.

Microsoft’s multiple ad properties. Microsoft’s ad properties include Bing search, Xbox, MSN and many other websites that use Xandr to sell digital ads. Microsoft also introduced vertical ad formats, credit card ads, and expanded its audience network into 66 new markets.

Search and news revenue up 16%. Microsoft reported its FY23 Q1 ad revenue is up 16%. But CFO Amy Hood told analysts during the earnings call that “reductions in customer advertising spend, which also weakened later in the quarter, impacted search in advertising and LinkedIn marketing solutions.”

But the search and advertising bump was “driven by higher search volume and Xandr.” CEO Satya Nadella said Microsoft has “expanded the geographies we serve by nearly 4x over the past year.” Microsoft Edge may also be helping out with Bing search and advertising revenues. “Edge is the fastest growing browser on Windows and continues to gain share as people use built-in coupon and price comparison features to save money,” Nadella said.

Rob Wilk, corporate vice president of Microsoft Advertising, said that he intends to make buying ads across assets easier for partners. “We have a lot of plumbing work to do,” he said in an October interview. Microsoft also needs to differentiate itself from competitors that have similar properties but far more mature advertising businesses, like Google. Wilk said Microsoft is more “partner oriented” than Google.




Netflix. The new Netflix partnership was launched this month and allows advertisers to purchase ads through the demand side platform Xandr. Microsoft will take a reseller fee, and experts predict that the partnership will be a huge revenue driver, easily clearing $10 billion in ad sales or more.

Gaming. Another great revenue driver for Microsoft is gaming. The acquisition of Activision Blizzard is still pending, but in-game ad revenue could be a unique selling point if the ads can be bought through Xandr.

Dig deeper. You can read the full article from Business Insider here.

Why we care. As Microsoft’s ad business grows, so do the opportunities for advertisers looking to expand their reach beyond Google and Facebook. Additional options such as in-game ads, Netflix, and the demand-side platform Xandr will open doors for publishers and brands alike.

The post Microsoft is planning to double the size of its ad business to $20 billion appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/V1aET7S

Rich results: 22 things every SEO pro needs to know

Rick and Morty. 

Hall and Oates. 

Coffee and bourbon. 

Rich results and structured data. 

All four: iconic duos. But only one can generate over $80k in revenue once added to your website. 

On May 17, 2016, Google introduced the concept of “rich cards,” which it has since revived into what SEO professionals today call rich results. Rich results were created to make a more engaging experience on Google’s search result pages.

Sample rich result on the SERPs

The result of rich results is a crowd-pleasing SEO tactic that produces an average of 58 clicks per 100 queries. Rich results are a dry and smooth SEO move, with flavors of structured data sprinkled with schema markup alongside sweeter code of JSON-LD. 

In the words of the 2022 Women in Tech SEO mentor, Anne Berlin:

  • “For my money, I’d recommend implementing your rich result strategy from the get-go with as much detail as possible to extend the window of time before a change in policy forces you to make an overhaul to return your site to eligibility for a rich result which was previously driving traffic.”

Rich results make a great eye-opener and a quick win to start a new SEO project. And our need for SEO quick wins has never been greater.

Ahead are 22 things every SEO professional needs to know about rich results.

1. Rich snippets (previously rich cards) are now officially called rich results

Let’s be honest: Google changes names almost as often as Kanye West. As of today, Kanye West’s official name is now Ye. It’s got a Cher and Madonna vibe to it. 

I digress.

When Google first released what SEO professionals call rich results today, rich results were called rich snippets, then rich cards.

Rich snippets and rich cards are rich results.  

If you’re still calling rich results “rich snippets” or “rich cards,” you might as well start talking about your troll collection from the ’90s. (Remember yesterday? We were so young then.)

2. Rich results, schema, and structured data are not the same

There is a difference between rich results, schema markup, and structured data. 


Schema markup (also called structured data format)

Google doesn’t describe exactly what schema markup is, because “schema” comes from the shared vocabulary at schema.org. 

While schema.org is helpful, Google clarifies that SEO professionals should rely on Google Search Central documentation because schema.org isn’t focused only on Google search behavior. 

Google refers to schema markup as “structured data format.”

Think of schema markup (or structured data format) as the language needed to create structured data. 

Schema markup (or structured data format) is required before you can move on to structured data. 

Structured data

Again, in Google’s words:

“Structured data is a standardized format for providing information about a page and classifying the page content; for example, on a recipe page, what are the ingredients, the cooking time and temperature, the calories, and so on.”

Google Search Central documentation on structured data markup

Ryan Levering, a software engineer at Google, breaks structured data down even further.
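
To make this concrete, here is a minimal sketch of structured data in practice: a hypothetical recipe page marked up with schema.org’s Recipe type in JSON-LD. Every name and value below is invented for illustration.

```html
<!-- Labels the page's ingredients, total time, and calories in a machine-readable way -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Cold Brew Coffee",
  "recipeIngredient": [
    "1 cup coarsely ground coffee",
    "4 cups cold water"
  ],
  "totalTime": "PT12H",
  "nutrition": {
    "@type": "NutritionInformation",
    "calories": "5 calories"
  }
}
</script>
```

Each key is a schema.org property mapping a visible piece of the page (the ingredient list, the brew time) to a standardized label search engines can parse.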

Rich results

In Google’s words:

“Rich results are experiences on Google Search that go beyond the standard blue link. They’re powered by structured data and can include carousels, images, or other non-textual elements. Over the last couple years we’ve developed the Rich Results Test to help you test your structured data and preview your rich results.”

Rich Results Test update - July 2020

3. Always use Google documentation instead of schema.org 

Schema.org is often used by SEO professionals when writing schema markup. 

But the reality is Google wants you to use Google’s documentation. Not surprising, right?

Google’s own John Mueller answers this question in an episode of Ask A Google Webmaster. 

And Google states it directly in its Introduction to Structured Data documentation. 

Use Google's documentation

4. There are 32 different rich result types

In Google’s search gallery for structured data today, there are 32 different rich result types. 

The rich result types include: 

  • Article
  • Book
  • Breadcrumb
  • Carousel (only available for recipes, courses, restaurants, and movies)
  • Course
  • Dataset
  • Education Q&A 
  • Employer aggregate rating
  • Estimated salary
  • Event
  • Fact check
  • FAQ
  • Home activities
  • How-to
  • Image metadata
  • Job posting
  • Learning video
  • Local business
  • Logo
  • Math solvers
  • Movie
  • Podcast
  • Practice problems
  • Product
  • Q&A
  • Recipe
  • Review snippet
  • Sitelinks search box
  • Software app
  • Speakable
  • Subscription and paywalled content
  • Video

5. Google Search Console doesn’t support all rich result types in its report

Have you ever looked at the Enhancements section in Google Search Console? 

GSC rich results report

The Enhancements report gives SEO professionals the opportunity to monitor and track the performance of rich results. 

Daniel Waisberg, Search Advocate at Google, shares how to monitor and optimize your performance in Google Search Console with rich results. 

Unfortunately, Google Search Console only provides monitoring for 22 out of the 32 rich result types. 

Google Search Console supports these 22 rich result types:

  • Breadcrumb
  • Dataset
  • Education Q&A
  • Event
  • FAQ
  • Fact check
  • Guided recipe
  • How-to
  • Image metadata
  • Job posting
  • Learning video
  • Logo
  • Math solvers
  • Merchant listings
  • Practice problems
  • Product snippet
  • Q&A page
  • Recipe
  • Review snippet
  • Sitelinks searchbox
  • Special announcement
  • Video

6. The Knowledge Graph is a type of rich result

Spoiler alert: the Knowledge Graph is a type of rich result. 

You know, these things you see in the SERPs.

Bonus tip: Kristina Azarenko (featured in the Knowledge Graph above) has an epic course with copious amounts of technical SEO knowledge. I’ve taken it, and I’m a better SEO professional for it. And, no, she didn’t pay me to say this. 

7. Featured snippets are rich results, too

Just when you thought you could read another SEO article and not see “featured snippets” again, here I am showcasing featured snippets. 

Featured snippets

It looks like Google will continue to support this tactic, as we’re seeing more FAQ rich results displayed in Google search. 

8. You can have more than one type of rich result on one page

Video rich results. Breadcrumb rich results. FAQ rich results. All great things on their own. But better together when using markup on one page. 

Can you guess how many rich results this product page from Keen Footwear has? 

Keen Footwear product page

If you answered three rich results, please pat yourself on the back (and enjoy a virtual hug from me). 
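
Under the hood, that just means multiple JSON-LD blocks on a single URL. Here is a sketch (not Keen’s actual markup; all values invented) of a product page carrying breadcrumb, product, and FAQ rich result types at once:

```html
<!-- Block 1: breadcrumb trail -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {"@type": "ListItem", "position": 1, "name": "Shoes", "item": "https://www.example.com/shoes"},
    {"@type": "ListItem", "position": 2, "name": "Hiking Boots", "item": "https://www.example.com/shoes/hiking-boots"}
  ]
}
</script>

<!-- Block 2: product with price, availability, and rating -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Hiking Boot",
  "image": "https://www.example.com/img/boot.jpg",
  "aggregateRating": {"@type": "AggregateRating", "ratingValue": "4.6", "reviewCount": "128"},
  "offers": {"@type": "Offer", "price": "149.95", "priceCurrency": "USD", "availability": "https://schema.org/InStock"}
}
</script>

<!-- Block 3: FAQ about the product -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Are these boots waterproof?",
    "acceptedAnswer": {"@type": "Answer", "text": "Yes, the upper is treated with a waterproof membrane."}
  }]
}
</script>
```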

9. Rich result enhancements are ‘a thing’

Gosia Poddębniak from Onely wrote a great piece on rich results that explains rich result enhancements perfectly. 

Essentially, the rich result enhancements you can achieve are based on the original rich result you’re going after. 

For example, job posting structured data has multiple properties, like datePosted, description, hiringOrganization, title, jobLocation, applicantLocationRequirements, baseSalary, directApply, and employmentType.

The more properties you complete, the more enhancements you could get in Google SERPs. 
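
As a sketch only (every value below is invented), a JobPosting block that fills in most of those properties might look like this. The more of them you populate, the more enhancements the listing becomes eligible for:

```html
<!-- Each property below can unlock an enhancement in the job listing result -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "JobPosting",
  "title": "Technical SEO Specialist",
  "description": "<p>Own the structured data strategy across our sites.</p>",
  "datePosted": "2022-11-30",
  "employmentType": "FULL_TIME",
  "directApply": true,
  "hiringOrganization": {"@type": "Organization", "name": "Example Corp", "sameAs": "https://www.example.com"},
  "jobLocation": {
    "@type": "Place",
    "address": {"@type": "PostalAddress", "addressLocality": "Columbus", "addressRegion": "OH", "addressCountry": "US"}
  },
  "baseSalary": {
    "@type": "MonetaryAmount",
    "currency": "USD",
    "value": {"@type": "QuantitativeValue", "value": 75000, "unitText": "YEAR"}
  }
}
</script>
```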

10. Rich results must be written in JSON-LD, microdata or RDFa

If you want to be eligible for Google’s rich results, your markup has to be in JSON-LD, microdata or RDFa. 

Gentlepeople, I give you: the Googleys. It’s like trying to be eligible for an Emmy, except you’re only eligible for a Google rich result. 

JSON-LD, microdata, and RDFa are linked data formats. Ironically, JSON-LD is a form of RDF syntax. 

Basically, these linked data formats help Google connect entities to other entities to help search engines better understand the context. 
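
To see the difference between the formats, here is the same hypothetical organization expressed two ways: microdata attributes woven into the visible HTML, and a standalone JSON-LD block.

```html
<!-- Microdata: attributes layered onto the HTML you already render -->
<div itemscope itemtype="https://schema.org/Organization">
  <span itemprop="name">Example Corp</span>
  <a itemprop="url" href="https://www.example.com">example.com</a>
</div>

<!-- JSON-LD: the same facts, kept in one self-contained script block -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://www.example.com"
}
</script>
```

RDFa works much like microdata, annotating existing HTML tags, just with a different set of attribute names.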

11. JSON-LD is the preferred rich result format

It’s official from the Sirs at Google: JSON-LD is the recommended structured data format. 

Why is JSON-LD the preferred rich result format?

It’s easy to implement, and it doesn’t hurt page speed performance because browsers don’t render or execute the script block. 




12. It does not matter where on the page JSON-LD is implemented

Unlike many SEO-related code changes, structured data format does not need to go in the <head> section of your website. 

Structured data can be placed anywhere on the page, in the <head> or the <body>. 
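
As a quick sketch (example values invented), a JSON-LD block sitting at the end of the <body> is just as eligible as one in the <head>:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Apple Pie</title>
  </head>
  <body>
    <h1>Apple Pie</h1>
    <p>Grandma's recipe, written down at last.</p>

    <!-- Structured data can live here, at the bottom of the body -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Recipe",
      "name": "Apple Pie"
    }
    </script>
  </body>
</html>
```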

13. Use a tool to automate your markup

I’m reminiscing about the words of the legendary Oprah, “You get a car. You get a car.”  

So in honor of Oprah, this is your moment of freedom. Yes, you get a rich result tool. You don’t have to be a web developer to add markup to your website. 

If you’re using WordPress, plugins like Yoast SEO or Rank Math can generate the markup for you. 

If you’re using Shopify, there are tools in the app store. 

If you’re on Drupal, Sitecore, or any other enterprise or custom-coded platform, I’d recommend Schema App. 

Or, if you’re into Google Tag Manager, you can add structured data with GTM. 
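
As a sketch of the GTM route: the body of a Custom HTML tag, fired on the relevant pages, can carry the JSON-LD script directly, and GTM will inject it into the page. The markup then only exists in the rendered DOM after JavaScript runs, which is exactly why the caveat below matters.

```html
<!-- Body of a GTM Custom HTML tag (hypothetical Organization markup); GTM injects it when the tag fires -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png"
}
</script>
```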

Just be careful. When I spoke with Anne Berlin, Senior Technical SEO at Lumar, she shared that this can backfire on very slow sites. 

  • “Where there is an excessive amount of (particularly render-blocking third party) Javascript on a site, the schema may not be injected into the DOM before googlebot's render timeout window is hit, and it moves on from the page. So this is a commonly used method, and has multiple advantages, but important to be aware that there may periodically be detection or validation issues when this method is used on an already slow template.”

14. Always test using the rich result test tool

The Rich Results Test is your friend. 

At the risk of stating the obvious, this tool is very useful.

Mostly because you don’t have to understand what entities, predicates, or URIs are in relation to linked data formats. 

If you’re testing in a staging environment, run the page through the Rich Results Test. 

After your webpage is live, test it with the Rich Results Test again. 

15. If your rich result violates a quality guideline, it will not be displayed in Google SERPs

If you violate Google’s quality guidelines, the chances of your rich result appearing in the search results are about as good as Blockbuster making a comeback.

It’s not going to happen. 

16. You can receive a manual action if your rich result violates Google’s guidelines

The only thing worse than logging into Google Search Console to see you’ve received a manual action is the great Sriracha shortage of 2022. 

Repeat after me: I can receive a manual action if my page contains spammy structured data. 

One more time. 

I can’t tell you how many clients I’ve worked with who asked me to mark up reviews and ratings that weren’t made by actual users. 

This violates Google’s structured data quality guidelines, and you can receive a manual action for it. 

17. Google will not show a rich result for content that is no longer relevant

If you have content that is no longer relevant, Google will not display a rich result. 

For example, if your job posting expired three months ago, Google will not display a rich result; you must update the posting. 

Or if you labeled a broadcast as a live event but the stream has long since ended, Google will not display the rich result. 
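
Date properties in your markup are how you signal freshness. For a job posting, for instance, validThrough tells Google when the listing expires. A deliberately abbreviated sketch, with invented dates (a real posting still needs its required properties):

```html
<!-- Abbreviated sketch: only the date properties are shown -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "JobPosting",
  "title": "Technical SEO Specialist",
  "datePosted": "2022-09-01",
  "validThrough": "2022-12-01T00:00"
}
</script>
```

Once that date passes, the rich result disappears until the posting is updated or removed.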

18. If the rich result is missing required properties, it will not appear in the SERPs

There is a set of “required properties” Google must have for a rich result to appear in the search results. 

For example, if you want to mark up an article page, you’ll also want the recommended properties:

  • Author
  • Author name
  • Author URL
  • Date modified
  • Date published
  • Headline
  • Image

If Google provides recommended property options, use them. 
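
Here is a sketch of an Article block populating those recommended properties (all values invented). The author object carries both the name and URL from the list above:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Rich results: 22 things every SEO pro needs to know",
  "image": ["https://www.example.com/img/article-hero.jpg"],
  "datePublished": "2022-11-30T08:00:00+00:00",
  "dateModified": "2022-11-30T09:30:00+00:00",
  "author": [{
    "@type": "Person",
    "name": "Example Author",
    "url": "https://www.example.com/author/example-author"
  }]
}
</script>
```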

Lucky me, I’ve had the joy of working with Berlin, a fellow SEO and plant lover, who shared her thoughts on recommended properties. 

  • “When making a play for the added SERP real estate and CTR of product rich results, it is strongly advisable to fulfill more than just the minimum required properties. With competition for online shopping clicks and the associated advertising investments heating up, product rich results have been especially volatile this year. The more properties you've populated, the more new and experimental rich results you'll qualify for as they're rolled out.

One potential pitfall – read the recommended property notes in the Google guidelines carefully. 

If you're marking up online events and just scan the list and think, eventAttendanceMode is not required, you could miss that if this property isn't defined, Google interprets the event as happening at a physical location.”

20. Adding rich results on the canonical page is not enough if you have duplicates

Google states:  

“If you have duplicate pages for the same content, we recommend placing the same structured data on all page duplicates, not just on the canonical page.”

This step often gets skipped because SEO professionals assume that if you have a canonical page, you’re golden. Unfortunately, simply adding a canonical tag doesn’t mean you’re done with the page. 

21. If you have a mobile and desktop version, add rich results to both versions

If you’re running an m.websitename.com and a websitename.com, you will need to add your rich results to both versions. Search engines treat these as two separate websites. 

Whatever you do to the desktop version, you have to complete it on the mobile version. 

22. There is no guarantee your page will receive rich results

Well… you did it. You added your product review structured data and tested it in the rich result tool, but nothing happens. 

The truth is there’s no guarantee Google will reward your website with rich results. 

Yes, this can bring up all the feelings you felt watching Chance and Sassy return home at the end of Homeward Bound, waiting for Shadow to appear. 

If you’re lucky, Google may limp your rich results back home. But it’s a waiting game. 

Get rich results or die tryin’

Get rich results or die trying is a nod to the rapper 50 Cent and his relentless hustler mentality. 

When it comes to implementing rich results, you’ve got to pull your bootstraps up and get creative to showcase a rags-to-riches story of what you can do before and after rich results. 

If SAP can see more than 400% net growth in organic traffic from rich results, you can get there too. 

But remember this advice from Berlin:

  • “Depending on the type of rich result being targeted, you might need to improve a site's internal linking setup or rally the organization to define its voice and mission on an exceptional About page.” 

Rich results are more than just the markup on the page.

Rich results require tact and attention to detail to reap the benefits. Just remember to pay homage to Google’s documentation shared above and “pour one out” for all rankings you’ve lost without rich results. 

The post Rich results: 22 things every SEO pro needs to know appeared first on Search Engine Land.



via Search Engine Land https://ift.tt/GfmDln5