Search Engine Basics – Climb to the top of Google

Demand for search continues to rise because people can obtain information much faster than in the past, when they had to make a trip to the closest library to learn something new. We live in a digital era where most shopping, banking, and other transactions are conducted online. For a website to matter today, it needs to be visible on the internet.

What is organic SEO and why is it a challenge for marketers?

Organic (natural) SEO refers to earning a top position in Google's unpaid, algorithm-driven results. A good SEO plan starts with getting to know the audience behind a particular brand, along with their interests and needs. Which keywords (shorter or longer search phrases) are they likely to type? What is their intent? We differentiate between navigational, informational, and transactional queries.

Navigational queries: Here users have a particular website in mind, and if yours is not that website, they will skip you. Branded searches of this kind tend to drive the highest-value traffic, which is the most likely to convert.

Informational queries: These are the hardest to target because their scope is usually broad. Users type them with the intent to learn as much as they can about something that intrigues them; such queries can lead to a conversion but often do not. This is where search engines usually surface blog posts, how-to videos, and other guides in search results.

Transactional queries: These queries do not necessarily involve an immediate transaction, but they may lead to one. Examples include signing up for a free trial account, searching for specific products to buy, or looking for restaurants, hotels, flights, and the like.
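To make the taxonomy concrete, here is a minimal sketch of how queries could be bucketed by intent using simple keyword heuristics. It is purely illustrative: real search engines use far more sophisticated models, and every hint word below is an assumption chosen for the example.

```python
# Toy intent classifier -- real engines use ML models, not keyword
# lists. All hint words here are simplified, illustrative assumptions.

NAVIGATIONAL_HINTS = {"login", "homepage", "facebook", "youtube"}
TRANSACTIONAL_HINTS = {"buy", "price", "cheap", "order", "free trial", "book"}

def classify_intent(query: str) -> str:
    """Bucket a search query into one of the three intent types."""
    q = query.lower()
    if any(hint in q for hint in TRANSACTIONAL_HINTS):
        return "transactional"
    if any(hint in q for hint in NAVIGATIONAL_HINTS):
        return "navigational"
    # Broad questions default to informational intent.
    return "informational"

print(classify_intent("buy running shoes"))           # transactional
print(classify_intent("facebook login"))              # navigational
print(classify_intent("how do search engines work"))  # informational
```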

To summarise, the biggest challenge for marketers is making sure that the content on a website is optimised to match the exact needs of that website's specific target audience.

What are SERPs (Search Engine Results Pages)?

Search engine results pages are the pages a search engine returns to fulfil a query. Google, Yahoo, and Bing all share one specific goal: to deliver the most accurate results to their users. Their layouts differ slightly, but they all include the following sections:

Vertical navigation – a navigation menu of sorts where users can choose how they want to view the search results (images, news, videos, or maps).

Search query box – this is where users type their queries.

Result information – the snippet of metadata (the short descriptive text underneath each URL).

PPC advertising – paid adverts served through Google Ads (formerly AdWords) or Bing Ads.

Natural/organic/algorithmic results – the unpaid results ranked by the search engines' complex algorithms.

Apart from the above ‘standard’ results, search engines also show vertical results and instant answers. I have already written a blog post that covers structured data and featured snippets in more depth. Getting placed in a vertical result can be a significant win for an SEO practitioner.
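As a concrete illustration, structured data is usually embedded in a page as a JSON-LD block. The sketch below builds a minimal, hypothetical schema.org Article object in Python; the headline, author, and date are placeholder values, not real data.

```python
import json

# A minimal schema.org Article object -- all values are placeholders.
# Embedded as <script type="application/ld+json"> in a page's HTML, it
# helps search engines understand the content and can enable rich results.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Search Engine Basics",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2020-01-01",
}

snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article, indent=2)
    + "\n</script>"
)
print(snippet)
```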

Google’s Knowledge Graph & Google Knowledge Graph Card/Panel – what’s the difference?

Google’s Knowledge Graph is a system Google launched in 2012 to expand the horizons of its search results. It discovers the connections between different people, places, and things and displays those entities alongside Google’s traditional search results. The Knowledge Graph card (or panel) is the information box shown on the right-hand side after googling certain search terms. Google can understand, for example, that Marie Sklodowska-Curie (a person) was a physicist born in Warsaw (a place) who was married to Pierre Curie (a relation) and gained her fame by discovering radium and polonium (things). In the ‘People also search for’ section, Google lists people connected to Marie Sklodowska-Curie, such as her husband, her children, or other physicists. This shows that Google understands further connections between people as well as things. It can see those connections thanks to its factual database, which stores information about, and relationships between, millions of entities (entities being people, places, and things). Type ‘Marie Sklodowska-Curie’ into Google and check for yourself.

Crawling, Indexing and Ranking

Search engines have one crucial job: to provide the best and most accurate results to their users. Two factors play a significant role in determining whether a website will show up in SERPs: relevance and importance. Relevance measures how well the content matches a user's intent, whereas importance refers to the authority the website has earned online. If both are high, the chances of ranking increase significantly.
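The interplay of the two factors can be pictured with a toy scoring function. This is purely illustrative, under the simplifying assumption that each page gets one relevance and one importance number; no real search engine ranks this way.

```python
# Purely illustrative: rank pages by combining relevance and importance.
# Real ranking algorithms weigh hundreds of signals; these two numbers
# and the multiplication are simplifying assumptions.

pages = [
    # (url, relevance to the query 0..1, importance/authority 0..1)
    ("site-a.example/guide", 0.9, 0.8),
    ("site-b.example/post", 0.9, 0.2),   # relevant but little authority
    ("site-c.example/home", 0.3, 0.9),   # authoritative but off-topic
]

# A page needs BOTH signals to score well, which multiplication captures.
ranked = sorted(pages, key=lambda p: p[1] * p[2], reverse=True)
for url, relevance, importance in ranked:
    print(f"{url}: score={relevance * importance:.2f}")
```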

How do search engines find new websites? – Crawling

Search engines (Google, Yahoo, Bing, etc.) run automated software programs called crawlers or spiders that scan trillions of documents (pages and files) on the web, looking for the content most relevant to specific queries.

Please note that some countries favour different search engines, and the way Google or Bing operate does not necessarily apply to them. Popular examples include Yandex (Russia), Baidu (China), Seznam (Czech Republic), and Naver (South Korea).

Going back to the primary question: the automated program (bot) starts by crawling high-quality seed websites already stored in the search engine's index (its database), then visits the links on each page of those websites to look for new web pages. In practice, this means your website can be found through links from other sites on the internet pointing to it. If you want Google to discover your website faster, add it to Google Search Console, which queues it for the next crawl.
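To illustrate the ‘follow links from seed pages’ idea, here is a minimal crawler sketch using only Python's standard library. It is a drastically simplified model of a real spider (no politeness delays, robots.txt checks, or JavaScript rendering), and the seed URL is a placeholder.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl starting from a single seed page."""
    queue, seen = deque([seed_url]), {seed_url}
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip unreachable pages
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen

# Placeholder seed; a real spider starts from many trusted seed sites.
print(crawl("https://example.com"))
```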

There are some limitations to what search engines can see on a website. They may have difficulties reading images, videos, some JavaScript, or iframes. That does not mean they will struggle with these forever, though, as crawlers are updated regularly.

A few words on the robots.txt file and sitemaps

What is the robots.txt file and why is it important for SEO? Before crawling any website, the automated program looks for instructions, and those instructions live in the robots.txt file. Thanks to it, the bot knows which pages it may crawl and which pages are off limits. I highly recommend checking Neil Patel's blog post, which tells you more about how to best leverage the robots.txt file for SEO.
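Python's standard library even ships a parser for exactly this file. As a quick sketch, the snippet below checks whether a hypothetical bot may fetch given URLs; the domain and the ‘MyBot’ user agent are placeholders.

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain; a well-behaved bot does this before every crawl.
parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt file

# Ask whether our (hypothetical) user agent may crawl these paths.
print(parser.can_fetch("MyBot", "https://example.com/blog/"))   # e.g. True
print(parser.can_fetch("MyBot", "https://example.com/admin/"))  # e.g. False
```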

What is a sitemap?

In short, a sitemap is a hierarchical collection of the pages on a website. It is very useful to an automated bot because it helps it discover all those pages faster. A sitemap should include only the important pages on the website to avoid wasting crawl budget. Check out Backlinko's blog post if you want to learn more about crawl budget.
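For illustration, an XML sitemap can be generated in a few lines with Python's standard library. The URLs below are placeholders; a real sitemap would list only the pages worth crawling.

```python
import xml.etree.ElementTree as ET

# Placeholder URLs -- include only pages that deserve crawl budget.
important_pages = [
    "https://example.com/",
    "https://example.com/about/",
    "https://example.com/blog/search-engine-basics/",
]

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for page in important_pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page

# Write sitemap.xml, ready to be referenced from robots.txt.
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8",
                             xml_declaration=True)
```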

Does crawling guarantee indexing?

Knowing that a website has been crawled is only a third of the battle, and I strongly believe this is where technical SEO plays a crucial role. Technical SEO is the foundation on which all further work rests, such as content creation, promotion, and link building. After visiting the website, the bot checks all sorts of things: the content of each page, navigation links, the value and length of the content, videos, images, and more. Although limited in its abilities, it tries to determine whether the website is a good match for a given query.

Below are some questions you can run through to gauge whether an automated bot is likely to consider your website a valuable source of information (a small audit sketch follows the list):

Is my website architecture clear?
Are the URLs on my website readable?
Are my images optimised? Do they have alt attributes?
Is my code clean and readable to search engines? 
What is my website about? Do I write blogs that are relevant to my business? Are they helpful to users? 
Is my content high quality? Do I have decent grammar?
Do I have internal links on the website for better navigation and user experience?
Is my content unique?
Do I use headings and subheadings to provide better structure to my blogs?
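A few of these checks can even be automated. The sketch below flags images missing alt text and counts heading tags in an HTML page using only the standard library; it is a rough illustration on placeholder HTML, not a full SEO audit.

```python
from html.parser import HTMLParser

class SeoAudit(HTMLParser):
    """Flag images without alt text and count heading tags."""
    def __init__(self):
        super().__init__()
        self.images_missing_alt = 0
        self.headings = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.headings += 1

# Placeholder HTML standing in for a real page.
sample = "<h1>Title</h1><img src='a.png'><img src='b.png' alt='Chart'>"
audit = SeoAudit()
audit.feed(sample)
print(f"Images missing alt text: {audit.images_missing_alt}")  # 1
print(f"Headings found: {audit.headings}")                     # 1
```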

Ranking – the final part

Many ranking factors dictate whether a website will appear in search results. Mobile-friendliness, loading speed, domain age, technical SEO, links, and optimised content are the most important indicators of success. Plenty of online resources can help you get started with SEO, and you do not need to get all of these things right to appear in SERPs. Sometimes a few changes, such as implementing structured data, writing about a unique but in-demand subject, or simply improving the overall user experience, can bring unexpectedly positive results.

Conclusion

We live in a digital era where access to knowledge and information is easier and faster than ever before. Entrepreneurs need an online presence if they want to be noticed by their customers, and that means a website that is well built and optimised for SEO. Putting a website live is not enough. SEO is a key component of long-term success, and it is not a one-time project or job; it is a continuous effort that requires knowledge, practice, and monitoring. This blog post was inspired by the ‘Search Engine Basics’ chapter of ‘The Art of SEO: Mastering Search Engine Optimization’ by Eric Enge, Stephan Spencer, and Jessie C. Stricchiola (3rd edition).
