Compiling a semantic core: detailed instructions. A complete guide to selecting a semantic core and deciding what should be excluded from it. Let's figure it out.

If you know the pain of search engines' "dislike" for the pages of your online store, read this article. I will talk about the path to increasing a site's visibility, or more precisely, about its first stage: collecting keywords and compiling a semantic core, the algorithm for creating it, and the tools used along the way.

Why create a semantic core?

To increase the visibility of site pages, so that Yandex and Google search robots begin to find your site's pages in response to user queries. Of course, collecting keywords (compiling semantics) is only the first step towards this goal. Next, a rough "skeleton" is sketched out to distribute keywords across different landing pages, and then articles and meta tags are written and implemented.

By the way, on the Internet you can find many definitions of the semantic core.

1. “The semantic core is an ordered set of search words, their morphological forms and phrases that most accurately characterize the type of activity, product or service offered by the site.” Wikipedia.

To collect competitor semantics in Serpstat, enter one of the key queries, select a region, click “Search” and go to the “Key phrase analysis” category. Then select “SEO Analysis” and click “Phrase Selection”. Export results:

2.3. We use Key Collector/Slovoeb to create a semantic core

If you need to create a semantic core for a large online store, you cannot do without Key Collector. But if you are a beginner, it is more convenient to start with a free tool, Slovoeb (don't let the name scare you). Download the program, and in the Yandex.Direct settings specify the login and password of your Yandex.Mail account:
Create a new project. In the “Data” tab, select the “Add phrases” function. Select your region and enter the requests you received earlier:
Advice: create a separate project for each new domain, and a separate group for each category/landing page. Now collect semantics from Yandex.Wordstat: open the "Data collection" tab and select "Batch collection of words from the left column of Yandex.Wordstat". In the window that opens, tick the checkbox "Do not add phrases if they are already in any other groups", enter a few of the most popular (high-frequency) phrases and click "Start collecting":

By the way, for large projects in Key Collector you can collect statistics from competitor analysis services SEMrush, SpyWords, Serpstat (ex. Prodvigator) and other additional sources.

Greetings, dear reader of the web-revenue blog!

Today I decided to tell you about the basics of SEO promotion, namely compiling the site's semantic core (SC).

A semantic core is a library of search words or phrases and their morphological forms that most accurately characterize the activities of the site, as well as the goods or services it offers. Roughly speaking, compiling a semantic core means compiling a structured list of the target queries for which the site is planned to be promoted!

Why is the semantic core of a website created?

1. The semantic core defines the theme of the site, which search engines will take into account.

2. A correctly formed semantic core is the basis for the optimal structure of a web resource.

3. To link each page of the site to a specific part of the semantic core (a set of keywords).

4. To form a limited set of keywords so that promotion resources can be allocated rationally across specific queries.

5. To estimate the cost of promoting the website in search engines.

Basic Concepts

Before we begin compiling a semantic core, let’s look at a few basic concepts.

1. All queries that users enter into search engines can be divided into:

High frequency (HF)

Mid frequency (MF)

Low frequency (LF)

How do you find out which group a given query belongs to, you ask? In general, there are no strict boundaries separating high-frequency from mid-frequency queries, or mid-frequency from low-frequency ones; much depends on the site's topic. If we take average values, we will consider queries entered up to 450-700 times a month low-frequency; up to 1.2-2 thousand times a month mid-frequency; and over 2 thousand times a month high-frequency.
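To make the grouping concrete, here is a minimal Python sketch of it, using the rough average thresholds from this paragraph (about 700 and 2,000 impressions per month). The numbers are illustrative and should be adjusted to your topic:

```python
# A minimal sketch of the frequency grouping described above.
# The thresholds (700 and 2000 impressions/month) are the rough
# averages from this article; adjust them to your own topic.

def frequency_group(impressions_per_month: int) -> str:
    if impressions_per_month <= 700:
        return "LF"   # low frequency
    if impressions_per_month <= 2000:
        return "MF"   # mid frequency
    return "HF"       # high frequency

queries = {"brushes for photoshop": 61134, "fruit drinks": 450}
for phrase, freq in queries.items():
    print(phrase, frequency_group(freq))  # HF, LF
```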

Many webmasters recommend starting to promote a site with low-frequency and mid-frequency queries. On the one hand this is correct, but there is one catch: some low-frequency and mid-frequency queries have such high competition that promoting for them is no easier than promoting for high-frequency ones.

So, when compiling a site's semantic core, you should not rely only on word frequency; you also need to determine how hard it will be to compete for a given query.

Therefore, we will introduce 3 more groups of queries:

Highly competitive (HC);

Moderately competitive (MC);

Low competitive (LC).

Many people assume that high-frequency queries are highly competitive, mid-frequency ones moderately competitive, and low-frequency ones low-competitive. However, this is not always the case. Nowadays, in many areas, low-frequency queries have become so in demand that it is better not to try to reach the TOP with them, and sometimes it is easier to break into the top with mid-frequency queries (though this is also rare). Sometimes you should take into account words that people often misspell (for example, Volkswagen can be typed as Volcwagen or Volswagen), or words a person types having forgotten to change the keyboard layout: "cjplfnm uba fybvfwb" instead of "create a GIF animation". Such misspellings can also be used to promote a website well!

And three more important concepts:

Primary queries are queries that characterize the resource "in general" and are the most general for the site's subject. For example, the primary queries for my website are: website creation, website promotion, making money on a website, etc.

Main queries are those included in the semantic core list, the ones for which promotion is advisable. For my blog: how to create a website, how to promote a website, making money on a website, etc.

Auxiliary (associative) queries are queries that people who entered the main queries also typed. They are usually similar to the main queries. For example, for the query SEMANTIC CORE, the queries internal optimization, website promotion and SEO will be associative.

I have explained the basic theory, now we can move on to the basics of compiling a semantic core:

1. If you are composing a semantic core for your own website, first sit down and think about what your website is about and what queries a person might use to find it; try to come up with as many keywords and phrases for your topic as possible and write them down in a text document. For example, if you are going to make a website about various drinks, cocktails, etc., after a little thought you can write down something like: soft drinks, cocktail recipes, making cocktails, fruit drinks, etc.

And if you are doing this for a client, ask the client for a list of words by which he wants to promote his site.

2. We analyze competitors' sites from the top 10 (looking at which queries they are promoted for and get most of their traffic from).

3. We use the client's price list (names of goods, services, etc.).

4. We try to find synonyms for keywords (hard drive - hard disk - HDD).

5. We collect the keywords that suit your personal blog, Internet resource or business. Here you can use Wordstat search-query statistics, or better, special software such as Key Collector.

6. We count traffic for the selected search queries. For this you can also use Key Collector, or link aggregators such as SeoPult or WebEffector.

7. We remove dummy queries: search queries whose impression numbers are greatly inflated or outright empty. You won't get visitors from dummy queries (a sketch of this check follows just below).

8. We remove keywords with a very high promotion budget. Again, you can get an approximate budget from SeoPult or WebEffector. Highly competitive queries can also be filtered out here.

Then we distribute them throughout the site.
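As promised in step 7, here is a minimal sketch of one common dummy-query check. It assumes you have already collected each phrase's broad Wordstat frequency and its exact ("!") frequency, for example via Key Collector; the 5% ratio and the second example phrase are assumptions for illustration:

```python
# A minimal sketch of dummy-query filtering, assuming you have each
# phrase's broad Wordstat frequency and its exact ("!") frequency.
# A phrase whose exact frequency is tiny relative to its broad
# frequency is likely a "dummy" that will bring no visitors.

def is_dummy(broad_freq: int, exact_freq: int,
             min_ratio: float = 0.05, min_exact: int = 10) -> bool:
    if exact_freq < min_exact:
        return True
    return exact_freq / max(broad_freq, 1) < min_ratio

phrases = [
    ("brushes for photoshop", 61134, 5200),
    ("photoshop online now free 2010", 8000, 3),  # hypothetical dummy
]
kept = [p for p, broad, exact in phrases if not is_dummy(broad, exact)]
print(kept)  # ['brushes for photoshop']
```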

The general scheme for compiling a semantic core looks something like this:

As a result, we will receive a list of keywords for our site. That's basically the whole scheme. It is not that complicated, but it is quite labor-intensive and takes a fair amount of time. But as I wrote above, this is the foundation of the site, and it is worth paying close attention to.

Mistakes that are usually made when compiling a semantic core:

When selecting keywords, try to avoid the following problems:

The semantic core should not consist of general phrases that poorly characterize your site, or, conversely, of queries that are too narrow. For example, if a visitor wants to learn about creating a vertical drop-down menu in WordPress, he will type "creating a vertical drop-down menu in WordPress", not "creating a website", "creating a site menu" or "web site". In general, you should cover more specific queries; on the other hand, queries that are too narrow will not bring you enough visitors. Try to find a middle ground.

If you have little text, you shouldn't cram many keywords into it. For example, this article is tailored to 3 keys, but the volume of text is quite substantial: more than 6 thousand characters. Ideally there should be 1 key per article, but you can use the following rule: one or two keywords per 2 thousand characters of text.
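As a rough sketch, the rule of thumb above can be written as a one-line helper:

```python
# A tiny helper for the rule of thumb above: roughly one key
# per 2,000 characters of text, and at least one key per article.

def max_keys(text: str, chars_per_key: int = 2000) -> int:
    return max(1, len(text) // chars_per_key)

print(max_keys("x" * 6000))  # 3 keys, as in the article's own example
```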

When compiling the site's semantic core, people often fail to take into account the misspellings that users accidentally make when typing. I spoke about them above.

Well, I think that's enough theory for you; we'll continue the topic in the next article!

Considering search engines' constant struggle against various link factors, correct site structure increasingly comes to the fore in search engine optimization.

One of the main keys to competently developing the site structure is the most detailed possible elaboration of the semantic core.

At the moment there are quite a lot of general instructions on how to make a semantic core, so in this material we have tried to give more detail on exactly how to do it, and how to do it with minimal time spent.

We have prepared a guide that answers step by step the question of how to create the semantic core of a website. With specific examples and instructions. By using them, you will be able to independently create semantic cores for promoted projects.

Since this post is quite practical, a lot of different work will be done through Key Collector, since it saves quite a lot of time when working with the semantic core.

1. Formation of generating phrases for collection

Expanding phrases for parsing one group

For each group of queries, it is highly advisable to immediately expand it with synonyms and other wording.

For example, let’s take the request “swimwear” and get various other reformulations using the following services.

Wordstat.Yandex - right column

As a result, for the initially specified phrase we can obtain another 1-5 different reformulations, for which we will then need to collect queries within one group.

2. Collecting search queries from various sources

After we have identified all the phrases within one group, we move on to collecting data from various sources.

The optimal set of parsing sources for obtaining the highest-quality output data for the RuNet is:

● Wordstat.Yandex - left column

● Yandex + Google search suggestions (with search by endings and substitution of letters before a given phrase)

Tip: if you do not use proxies in your work, then to prevent your IP from being banned by search engines it is advisable to use delays between requests:
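The concrete delay values from the original screenshot are not reproduced here, so the sketch below assumes a randomized 10-30 second pause; `fetch` is a hypothetical placeholder for whatever single-phrase collection function you use:

```python
# A minimal throttling sketch, assuming no proxies. The 10-30 second
# window is an assumption (the article's exact values were in an image);
# tune it to the source you are parsing.

import random
import time

def collect_with_delays(phrases, fetch):
    """fetch(phrase) is your own collection function (hypothetical)."""
    results = {}
    for phrase in phrases:
        results[phrase] = fetch(phrase)
        time.sleep(random.uniform(10, 30))  # pause so the IP is not banned
    return results
```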

● In addition, it is also advisable to manually import data from the Prodvigator database.

For Western (non-RuNet) projects we use the same sources, minus the data from Wordstat.Yandex and Yandex search suggestions:

● Google search suggestions (using endings and substituting letters before a given phrase)

● SEMrush - corresponding regional database

● similarly, we use import from Prodvigator’s database.

In addition, if your site already collects search traffic, then for a general analysis of search queries in your topic, it is advisable to download all phrases from Yandex.Metrika and Google Analytics:

And for a specific analysis of the desired group of queries, you can use filters and regular expressions to isolate those queries that are needed for the analysis of a specific group of queries.

3. Cleaning queries

After all queries have been collected, it is necessary to carry out preliminary cleaning of the resulting semantic core.

Cleaning with ready-made lists of stop words

To do this, it is advisable to immediately use ready-made lists of stop words, both general and specific to your topic.

For example, for commercial topics such phrases would be:

● free, download, …

● abstracts, Wikipedia, wiki, ...

● used, old, …

● job, profession, vacancies, …

● dream book, dream, ...

● and others of this kind.

In addition, we immediately clean out all city names of Russia, Ukraine, Belarus, ....

After we have loaded the entire list of our stop words, we select the option for the type of occurrence search “independent of the word form of the stop word” and click “Mark phrases in the table”:

This way we remove obvious phrases with negative words.
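For readers who prefer to script this step outside Key Collector, here is a rough Python approximation. Key Collector matches stop words independently of word form; a faithful analogue needs a morphological analyzer, so this sketch only compares lowercase word prefixes against stem fragments you supply yourself (the example stems come from the lists above):

```python
# A rough sketch of the stop-word pass, one phrase per line.
# Word-form-independent matching is approximated by prefix stems;
# supply your own stems for your topic and language.

STOP_STEMS = {"free", "download", "abstract", "wiki", "used", "old",
              "job", "vacanc", "dream"}

def has_stop_word(phrase: str) -> bool:
    return any(word.startswith(stem)
               for word in phrase.lower().split()
               for stem in STOP_STEMS)

phrases = ["buy a swimsuit", "swimsuit sewing pattern download free"]
clean = [p for p in phrases if not has_stop_word(p)]
print(clean)  # ['buy a swimsuit']
```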

After we have cleared away obvious stop words, we then need to review the semantic core manually.

1. One of the quick ways is this: when we come across a phrase with obvious words that do not suit us, for example a brand that we do not sell, then we:

● next to such a phrase, click the indicated icon on the left,

● choose stop words,

● select a list (it is advisable to create a separate list and name it accordingly),

● if necessary, immediately select all phrases that contain the specified stop words,

● add them to the list of stop words.

2. The second way to quickly identify stop words is to use the “Group Analysis” functionality, when we group phrases by words that are included in these phrases:

Ideally, in order not to repeatedly return to certain stop words, it is advisable to include all marked words in a specific list of stop words.

As a result, we will get a list of words to send to the stop word list:

But it is advisable to quickly look through this list so that ambiguous stop words do not end up there.

This way you can quickly go through the main stop words and remove phrases that contain these stop words.

Cleaning up hidden duplicates

● sort by descending frequency in this column

As a result, we leave only the most frequent phrases in such subgroups, and delete everything else.
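A minimal sketch of this step, assuming implicit duplicates are phrases made of the same words in a different order; we keep only the most frequent phrase per word set:

```python
# A minimal sketch of removing implicit duplicates: phrases made of the
# same words in a different order. We keep only the most frequent
# phrase in each such subgroup, as described above.

def drop_hidden_duplicates(freqs: dict[str, int]) -> dict[str, int]:
    best = {}  # word set -> (phrase, frequency)
    for phrase, freq in freqs.items():
        key = frozenset(phrase.lower().split())
        if key not in best or freq > best[key][1]:
            best[key] = (phrase, freq)
    return dict(best.values())

print(drop_hidden_duplicates(
    {"buy swimwear": 900, "swimwear buy": 40, "red swimwear": 150}))
# {'buy swimwear': 900, 'red swimwear': 150}
```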

Cleaning phrases that do not carry much meaning

In addition to the cleaning described above, you can also remove phrases that do not carry much semantic meaning and will not particularly affect the search for groups of phrases for creating separate landing pages.

For example, for online stores, you can remove phrases that contain the following keywords:

● buy,

● sale,

● online store, … .

To do this, we create another list in Stop Words, add these words to it, mark the matching phrases and remove them from the table.

4. Grouping requests

After we have cleared out the most obvious garbage and inappropriate phrases, we can then begin grouping queries.

This can be done manually, or you can use some help from search engines.

We collect results for the desired search engine

In theory, it is better to collect results for the desired region from Google, because:

● Google understands semantics quite well

● it is easier to collect from: it does not ban proxies as readily

Nuances: even for Ukrainian projects it is better to collect results from google.ru, since the sites there are better structured, and therefore we will get much better results for landing pages.

Such data can be collected with A-Parser and other tools.

If you have a lot of phrases, you will obviously need proxies to collect search results. The combination of A-Parser + proxies (both paid and free) shows optimal collection speed and reliability.

After we have collected the search results, we now group the requests. If you have collected data in Key Collector, then you can further group phrases directly in it:

We don’t really like how KC does it, so we have our own developments that allow us to get much better results.

As a result, with the help of such grouping we are able to quickly combine requests with different formulations, but with the same user problem:

As a result, this leads to good time savings for the final processing of the semantic core.

If you do not have the opportunity to collect results yourself using a proxy, then you can use various services:

They will help you quickly group queries.

After such clustering based on search data, in any case, it is necessary to conduct a further detailed analysis of each group and combine those that are similar in meaning.
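For illustration, here is a simplified sketch of such SERP-based grouping, assuming you have already collected the top-10 URLs for every phrase (with Key Collector, A-Parser or one of the services mentioned). Phrases whose top-10 results share at least a threshold number of URLs land in one group; the threshold of 4 is an assumption, not a universal value:

```python
# A simplified sketch of SERP-overlap clustering. Each phrase maps to
# the set of URLs in its top 10; phrases sharing at least `min_shared`
# URLs with a group's first phrase are merged into that group.

def cluster_by_serp(serps: dict[str, set[str]], min_shared: int = 4):
    groups: list[list[str]] = []
    for phrase, urls in serps.items():
        for group in groups:
            if len(urls & serps[group[0]]) >= min_shared:
                group.append(phrase)
                break
        else:
            groups.append([phrase])
    return groups

serps = {
    "buy swimwear": {"a.com", "b.com", "c.com", "d.com", "e.com"},
    "swimwear online": {"a.com", "b.com", "c.com", "d.com", "f.com"},
    "swimwear sewing pattern": {"x.com", "y.com", "z.com", "q.com"},
}
print(cluster_by_serp(serps))
# [['buy swimwear', 'swimwear online'], ['swimwear sewing pattern']]
```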

For example, groups of queries like these ultimately need to be combined onto one page of the site:

Most important of all: each individual page on the site should answer one user need.

After processing the output semantics in this way, we should get the most detailed structure of the site:

● informational queries

For example, in the case of swimsuits, we can create the following site structure:

Each page of this structure will have its own title, description, text (if necessary) and products/services/content.

As a result, after we have grouped all the queries in detail, we can begin to collect in detail all the key queries within each group.

To quickly collect phrases in Key Collector we:

● select the main generating phrases for each group,

● go, for example, to parsing search suggestions,

● select "distribute into groups",

● select "Copy phrases from" Yandex.Wordstat in the drop-down list,

● click Copy,

● and begin collecting data from another source, but for the same phrases already distributed into groups.

The results

Let's look at the numbers now.

For the topic “swimwear”, we initially collected more than 100,000 different queries from all sources.

At the query cleaning stage, we managed to reduce the number of phrases by 40%.

After that, we collected the frequency for Google AdWords and left for analysis only those with a frequency greater than 0.

After that, we grouped the queries based on Google search results and obtained about 500 groups of queries, within which we then carried out a detailed analysis.

Conclusion

We hope that this guide will help you collect semantic cores for your sites much faster and better and will step by step answer the question of how to assemble a semantic core for a site.

Good luck collecting semantic cores, and as a result, quality traffic to your sites. If you have any questions, we will be happy to answer them in the comments.


Greetings, my dear readers!

I am sure that many of you have not only never heard of such a thing as a semantic core, but have no idea what it is! And what is it, you ask? I'll try to explain it in simple words. The semantic core is a set of keywords, phrases and simple sentences that a search engine (hereinafter, SE) returns when you enter a query in the search bar.

Why do you need a semantic core? The semantic core of a website is the basis of its promotion; it is necessary for internal optimization. Without a semantic core, promoting your project (website) will not be effective. The more competently the site's semantic core is compiled, the less money you will need to promote it successfully. Nothing is clear yet, right? Don't be alarmed, I'll try to break everything down in as much detail as possible. Read carefully and you will understand everything!

How to compose a semantic core!

The first thing you need to do after you have decided on the topic of your blog is to create a semantic core. To do this, you need to take a notebook and pen and write down all the words, phrases, sentences that characterize the topic of your blog. Each word, phrase or sentence is essentially a future title for your posts, and the more words you come up with, the more choices you will have when writing articles in the future.

Creating a fairly solid list (200-300 words) will take you a lot of time. Therefore, for convenience, we will use special services such as Yandex Wordstat, Google AdWords and Rambler Adstat; they will greatly simplify the task. Of course, we could get by with only Yandex and Google, because they are giants in search compared to Rambler, but statistics show that 5-6% of people still use Rambler as a search engine, so let's not neglect it.

To make it much easier for you to master the material, I will show everything with specific examples. Agree, theory is good, but when it comes to practice many people have problems. Therefore, we will create a semantic core together, so that in the future you can easily transfer the acquired knowledge and experience to your own blog topic. Let's say the theme of your blog is "Photoshop" and everything connected with it. So, as written above, you must come up with and write down in a notebook as many words, phrases and expressions as possible, call them whatever you want. These are the words that characterize my blog topic about Photoshop. Of course, I will not list the entire list of words, only a part, so that you understand the very principle of compiling a semantic core:

brushes for photoshop
photoshop brushes
photoshop effects
photoeffect
photoshop drawings
collage
photo collage
photomontage
photo frames
photo design

The list has been compiled. Well, let's begin. Let me make a reservation right away: your list may differ considerably from mine and should be much longer. I compiled this list for clarity, so that you catch the very essence of compiling a semantic core.

Keyword statistics Yandex wordstat

After your list is formed, you need to weed out all the words we do not need, those for which we definitely will not promote our blog. For example, I will not promote with words such as "brushes for Photoshop torrent" or "brushes for Photoshop makeup" (these phrases make no sense to me at all); we also filter out near-duplicate phrases such as "brushes for Photoshop for free" and "free brushes for Photoshop". I think the meaning of keyword selection is clear to you.

Next, you see that Yandex Wordstat has two columns. The left column shows what people typed into the search bar, in our case containing the phrase "brushes for Photoshop". The right column shows what else people searched for when they searched for "brushes for Photoshop". I advise you not to ignore the right column, but to select from it all the words that suit your topic.

Okay, we've sorted that out too; let's move on. Another very important point: as you can see, the search result for "brushes for Photoshop" shows a huge number, 61,134 queries! But this does not mean the phrase "brushes for Photoshop" was typed into the Yandex search bar that many times a month. Yandex Wordstat is designed so that if you type in the phrase "brushes for Photoshop", it shows the number of queries in any word forms, phrases and sentences that contain it ("photoshop brush", "free brushes for Photoshop", "download free Photoshop brushes", etc.). I think this is also understandable.

In order for Yandex Wordstat to give us a (relatively) accurate number of queries, there are special operators: quotation marks and "!". If you enter the phrase "brushes for Photoshop" in quotation marks, you will see a completely different number, which shows how many times people searched for the phrase "brushes for Photoshop" in its various word forms (e.g. "brush for Photoshop", etc.).

When you enter the phrase "!brushes !for !Photoshop" in quotes and with exclamation marks, we get the exact number of queries for the phrase exactly as it is, i.e. without any declensions, word forms or extra words. I think you've got the meaning; I've explained it as thoroughly as I can.

So, after you have created an impressive list in Excel, you need to apply the "!" operator to each word of each phrase. When you are done, you will have a list with the exact number of queries per month, which will then need to be pruned again.
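If your list is long, this transformation is easy to script rather than apply by hand in Excel; a tiny sketch of the "!" form described above:

```python
# A small helper that builds the exact-match variant of a phrase:
# the whole phrase in quotes, every word prefixed with "!".

def exact_match(phrase: str) -> str:
    return '"' + " ".join("!" + w for w in phrase.split()) + '"'

print(exact_match("brushes for Photoshop"))  # "!brushes !for !Photoshop"
```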

But more on this a little later, after we look at two other keyword-selection systems (Google AdWords and Rambler Adstat), since after considering them your list of keywords will expand significantly.

Google adwords keyword selection

Google AdWords is also used to select keywords; it is a service similar to Yandex Wordstat. Let's go there too: a keyword-selection window opens in front of us. In the same way, enter the first phrase from our list, "brushes for Photoshop", into the search bar. Please note that Google AdWords has no operators; instead, simply check the box next to [Exact] in the "Match Types" column. As we can see, the number of queries per month in Google AdWords differs significantly from Yandex Wordstat, which suggests that more people use the Yandex search engine. But if you look through the entire list, you can find keywords that Yandex Wordstat does not show at all.

Also in Google AdWords you can find out many other interesting things (for example, the approximate cost per click), which should also be taken into account when selecting keywords: the higher the cost per click, the more competitive the query. I won't go into detail here; the principle of selecting keywords is similar to Yandex Wordstat, and with a little digging you can figure it out yourself. Let's go ahead.

Statistics on search queries Rambler adstat

As I mentioned above, Rambler Adstat is much inferior to the two previous services, but you can still glean some information from it. Let's proceed the same way and enter the first phrase from our list, "brushes for Photoshop", into the search bar. I don't think it's worth going into detail here either; I repeat once again, the principle of selecting keywords is similar in all three systems.

We have now become acquainted with three services for selecting keywords. As a result, you have a huge list formed from all three services, from which you have already weeded out the queries you do not plan to promote for, as well as duplicates. I already wrote about this above. But this is only halfway to compiling a semantic core. Your head is probably already spinning, but in fact, if you delve into it, there is nothing complicated here. Believe me, it is better to compile the semantic core correctly once than to have to fix everything later; fixing it is much harder than doing everything from scratch. So be patient and let's move on.

High-frequency (HF), mid-frequency (MF) and low-frequency (LF) queries

When compiling a semantic core, you will also meet the concepts of high-frequency, mid-frequency and low-frequency queries (HF, MF and LF for short). These are simply the queries people enter into search engines: the more people enter the same query into the search bar, the higher its frequency (a high-frequency query), and likewise for mid- and low-frequency queries. I hope this is also clear.

Now remember one very important point. At the initial stage of a blog's development, it should be promoted only for low-frequency queries; sometimes mid-frequency ones are also used, depending on the competitiveness of the query. You are unlikely to manage HF queries: you simply won't have enough money for it. And don't be put off by low-frequency queries; reaching the TOP with them is possible without investing money. You probably have a question: which queries count as HF, MF and LF?

I don't think anyone can give an exact answer here! It will differ for blogs on different topics. There are very popular topics in which the exact number of queries ("!") reaches 20 thousand impressions/month or more (for example, "!Photoshop tutorials"), and less popular ones in which the exact number does not even reach 2,000 impressions/month (for example, "!English !lessons").

In this case, I adhere to a simple formula that I worked out for myself; I will demonstrate it using the example of "!lessons !Photoshop":
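The formula itself was shown in an image that is not reproduced here, so the sketch below is only an illustration of the general idea: thresholds relative to the topic's most popular exact query rather than absolute numbers. The 10% and 50% cut-offs are assumptions, not the values from the original formula:

```python
# An illustrative, topic-relative grouping (NOT the author's exact
# formula, which was shown in an image): compare each phrase's exact
# frequency to the most popular exact query in the topic.

def relative_group(exact_freq: int, topic_max: int) -> str:
    share = exact_freq / topic_max
    if share < 0.10:
        return "LF"
    if share < 0.50:
        return "MF"
    return "HF"

print(relative_group(7110, 20000))  # "Photoshop lessons" -> MF
```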

Highly competitive (HC), moderately competitive (MC) and low-competitive (LC) queries

In addition to HF, MF and LF queries, there is another categorization: highly competitive (HC), moderately competitive (MC) and low-competitive (LC) queries. Here we need to determine the competitiveness of the queries for which we plan to move to the TOP, but that deserves a separate post. For now, let's assume that HF queries are HC, MF are MC and LF are LC. In most cases this formula works, but there are exceptions, when, for example, low-frequency queries are highly competitive and, conversely, high-frequency ones are low-competitive. It all depends on the topic of the blog.

Scheme for compiling a semantic core

For clarity, let's look at a schematic example of a semantic core. This is roughly what a standard semantic core diagram should look like.

But you shouldn't get too attached to this scheme, because it may change as your blog develops. At the initial stage you may have, say, only four categories containing three low-frequency queries each, but over time everything may change.

Most of you will say that nothing is clear, especially those who are encountering the semantic core for the first time. It’s okay, I also didn’t understand many things at first until I studied the topic very well. I don’t want to say that I’m a pro in this topic, but I’ve learned a lot. And, as promised, let’s look at everything using a specific example and according to our topic.

I want to say right away that I am not an expert in Photoshop; this topic just came to mind while writing the post, so I selected queries by their general meaning. Okay, here's the semantic core diagram I came up with for the "Photoshop" topic. You should end up with something like this:

Types of queries

All queries (our keywords) can be divided into three categories:

  • Primary queries are those that, in one or two words, give a general definition of your resource or part of it. Primary queries that cover the general topic of your blog most broadly are best left on the main page. In our case these are: Photoshop lessons, Photoshop effects, how to make a photo collage.
    Primary queries that cover the general topic of your blog less broadly, but most accurately characterize some part of it, are best used for separate sections of your blog. In our case these are: Photoshop brushes, Photoshop frames, Photoshop templates, photo design.
  • Main queries are those that define the topic of your project quite precisely and can provide useful information to readers, teach them what they want, or answer the frequently asked question HOW? In our case these are: how to add brushes in Photoshop, how to make a template in Photoshop, how to make a photo collage in Photoshop, etc. The main queries should, in fact, become the titles of our future articles.
  • Additional (auxiliary) queries, also called associative, are queries that people also entered into the search bar when searching for the main query, i.e. key phrases related to the main query. They complement the main query and serve as keywords when promoting it to the TOP. For example: photoshop for beginners online, photoshop to remove red eye, collage of several photos. I think this is understandable.

Strategy for compiling a semantic core

Now we need to break the entire list into pages, i.e. from all your keywords select the primary queries, which will become your blog's categories, and make separate tabs for them in Excel. Next, select the main and auxiliary queries related to them and place them on the corresponding pages of the Excel document you created (i.e. by category). Here's what I got:

As I already wrote above: at the initial stage it is worth promoting your blog with low-frequency (LF) queries. But what to do with MF and HF queries, you ask? Let me explain.

You are unlikely to be able to advance with high-frequency queries, so you can delete them, although it is recommended to leave one or two for the main page. Let me make a reservation right away: you shouldn't chase the very highest-frequency query, such as "photoshop", whose exact number of impressions/month is 163,384. Say you want to use your blog to teach people how to work with Photoshop: then take as a basis the high-frequency query "Photoshop lessons", whose exact number of impressions/month is 7,110. This query characterizes your topic better, and it will be easier for you to advance with it.

MF queries can be placed on a separate page in Excel. As your blog rises in the eyes of the search engines, they will gradually come within your reach.

I know that beginners may not yet understand what I'm talking about; I advise you to read the related article, after studying which everything will become clear to you.

Conclusion

That's probably all there is to it. Of course, there are programs that will help you compile a semantic core, both paid (Key Collector) and free (Slovoeb, Slovoder), but I will not write about them in this post; perhaps someday I'll write a separate article about them. But they will only select keywords for you, and you will still have to distribute them across categories and posts yourself.

How do you create a semantic core? Or maybe you don’t compose it at all? What programs and services do you use when compiling? I'll be glad to hear your answers in the comments!

And finally, watch this interesting video.


Like almost all other webmasters, I compile a semantic core using the KeyCollector program; it is by far the best program for this task. How to use it is a topic for a separate article, although the Internet is full of information on this matter: I recommend, for example, the manual from Dmitry Sidash (sidash.ru).

Since the question was asked about an example of compiling a core, I will give an example.

List of keys

Let's say our site is dedicated to British cats. I enter the phrase “British cat” into the “List of phrases” and click on the “Parse” button.

I get a long list of phrases that begins as follows (the phrase and its frequency are given):

British cats 75553
British cats photo 12421
British fold cat 7273
British cat nursery 5545
British breed cats 4763
British shorthair cat 3571
colors of British cats 3474
British cats price 2461
blue British cat 2302
British fold cat photo 2224
mating of British cats 1888
British cats character 1394
buy a British cat 1179
British cats buy 1179
long-haired British cat 1083
pregnancy of a British cat 974
British chinchilla cat 969
cats of the British breed photo 953
nursery of British cats Moscow 886
color of British cats photo 882
British cats care 855
British shorthair cat photo 840
Scottish and British cats 763
names of British cats 762
British blue cat photo 723
British black cat 699
what to feed British cats 678

The list itself is much longer; I have only given the beginning.

Key grouping

Based on this list, my website will have articles about types of cats (fold-eared, blue, short-haired, long-haired), an article about the pregnancy of these animals, about what to feed them, about names, and so on down the list.

For each article, one main query is taken (= the topic of the article). However, the article is not limited to just one query: other relevant queries are also added to it, as well as different variations and word forms of the main query, which can be found in Key Collector below the list.

For example, with the word “fold-eared” there are the following keys:

British fold cat 7273
British fold cat photo 2224
British fold cat price 513
cat breed British fold 418
British blue fold cat 224
Scottish fold and British cats 190
British fold cats photo 169
British fold cat photo price 160
British fold cat buy 156
British fold blue cat photo 129
British fold cats character 112
British fold cat care 112
mating of British fold cats 98
British shorthair fold cat 83
color of British fold cats 79

To avoid overspam (which can also occur from using too many keys in the text, in the title, etc.), I would not take all of them with the main query included, but it does make sense to use individual words from them in the article (photo, buy, character, care, etc.) so that the article ranks better for a large number of low-frequency queries.

Thus, for the article about fold-eared cats we will form a group of keywords to use in the article. Groups of keywords for other articles are formed in the same way; this is the answer to the question of how to create the site's semantic core.

Frequency and competition

There is also an important point concerning exact frequency and competition: they must be collected in Key Collector. To do this, tick all the queries and, on the "Yandex.Wordstat Frequencies" tab, click "Collect frequencies "!"". The exact frequency of each phrase will be shown (i.e. for exactly this word order and exactly these word forms); this is a much more accurate indicator than the overall frequency.

To check competition in the same Key Collector, click "Get data for Yandex" (or for Google), then click "Calculate KEI using available data". As a result, the program will collect how many main pages for a given query are in the TOP 10 (the more there are, the harder it is to get in) and how many pages in the TOP 10 contain the query in their title (similarly, the more, the harder it is to break into the top).

Next we act based on our strategy. If we want to create a comprehensive site about cats, then exact frequency and competition are not so important to us. If we only need to publish a few articles, then we take the queries that have the highest frequency and, at the same time, the lowest competition, and write articles based on them.
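As a rough sketch of that selection, assuming you have exported for each phrase its exact frequency plus the two KEI inputs described above (main pages in the TOP 10 and title matches in the TOP 10), you can sort by a naive frequency-to-difficulty ratio; the scoring itself is an illustration, not Key Collector's KEI formula:

```python
# A minimal sketch of the final selection step. Each row holds:
# (phrase, exact frequency, main pages in TOP 10, title matches in TOP 10).
# We sort by a crude value = frequency / difficulty; this is an
# illustrative proxy, not Key Collector's actual KEI formula.

def pick_queries(rows, top_n=10):
    def score(row):
        phrase, freq, main_pages, title_matches = row
        difficulty = 1 + main_pages + title_matches  # crude 1..21 proxy
        return freq / difficulty
    return sorted(rows, key=score, reverse=True)[:top_n]

rows = [
    ("british fold cat", 7273, 6, 8),
    ("british fold cat care", 112, 1, 2),
]
print(pick_queries(rows, top_n=2))  # highest value first
```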
