The purpose of semantic search is to go beyond the 'static' dictionary meaning of a word or phrase and understand the intent of a searcher's query within a specific context. By learning from past results and creating links between entities, a search engine might then be able to deduce the answer to a searcher's query, rather than provide ten blue links that may or may not contain the correct answer.
2. How to optimise for Semantic Search Results?
3. People now hardly ever have to go to the 2nd page of search results. Search engines keep on tweaking their algorithms so that they give you back exactly what you want.
howsearchworks
SEARCH
From Algorithms to Answers.
4. CRAWLING AND INDEXING – it all starts even before you search for a query in a search engine.
ALGORITHMS get to work looking for clues to better
understand what the search query means.
RANKING – the results are ordered so that the most relevant, highest-quality results come first.
FIGHTING SPAM – Google comes up with over 500 algorithm changes every year! Almost 2 changes every day!
5. • A web search engine is software that searches the web for pages matching a query.
• PageRank is an algorithm used by Google to rank webpages by the quantity and quality of the links pointing to them.
Lesson 3: How Search Works
In this lesson we will look at how search engines work: how a simple search keyword fetches you hundreds of relevant results in a fraction of a second.
In this course we are learning how to rank a website on the first page of search results organically.
Suppose you are selling shoes online. When I search for the keyword "buy shoes for men" in the Google search box, how does Google decide which websites to show me? There are thousands of sites selling shoes. This is where SEO comes in. Search engines try to show the most relevant and helpful results for each search query.
So to do SEO efficiently you need to conform to all the factors a search engine looks for in a website before ranking it and displaying it against search queries. And to do that, you need to understand exactly how search works: how Google is capable of displaying the most helpful answers to your search queries.
The whole process of searching for and displaying answers in a search engine works in four major stages.
Crawling is where it all begins. Crawling is basically the acquisition of data about a website. It involves scanning the site and getting a complete list of your web content – the page titles, the images, your focus keywords, the outbound and inbound links of your website, and so on.
Search engines (e.g. Google) send out bots, which are pieces of algorithmic software, to browse the web and follow links from page to page, discovering content as they go. This process is called crawling, and it happens continuously, even before you type a query.
Google uses software known as "web crawlers" to discover publicly available webpages. The most well-known crawler is called "Googlebot."
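Under some simplifying assumptions, the crawl described above can be sketched as a breadth-first walk over a link graph. The URLs and link structure below are entirely made up for illustration; a real crawler such as Googlebot also handles robots.txt, politeness delays, page fetching, and much more:

```python
from collections import deque

# A tiny in-memory "web": each (invented) URL maps to the links found on that page.
FAKE_WEB = {
    "https://example.com/": ["https://example.com/shoes", "https://example.com/about"],
    "https://example.com/shoes": ["https://example.com/shoes/men"],
    "https://example.com/shoes/men": ["https://example.com/"],
    "https://example.com/about": [],
}

def crawl(start_url):
    """Breadth-first crawl: visit each page exactly once, following its links."""
    seen = {start_url}
    queue = deque([start_url])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)
        for link in FAKE_WEB.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("https://example.com/"))
```

The `seen` set is the key design point: without it, the loop in the link graph (the men's-shoes page links back to the homepage) would make the crawler run forever.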
These crawlers also create a copy of the webpages they visit. Google builds an index of all the web pages relevant to specific keywords, which reduces the time needed to answer a query when the same or a similar query is entered by another user. This process of organising sites on the basis of keywords is called indexing.
Google essentially gathers the pages during the crawl process and then creates an index. Much like the index in the back of a book, the Google index includes information about words and their locations.
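To make the book-index analogy concrete, here is a toy sketch of an inverted index, mapping each word to the pages that contain it. The page IDs and texts are invented for illustration:

```python
import re
from collections import defaultdict

# Hypothetical page texts standing in for crawled content.
PAGES = {
    "page1": "buy shoes for men online",
    "page2": "men running shoes sale",
    "page3": "history of the shoe industry",
}

def build_index(pages):
    """Map each word to the set of pages containing it, like a book's index."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(page_id)
    return index

index = build_index(PAGES)
print(sorted(index["shoes"]))  # ['page1', 'page2']
print(sorted(index["men"]))    # ['page1', 'page2']
```

Answering a query against this structure is a fast dictionary lookup rather than a scan of every page, which is exactly why search results come back in a fraction of a second.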
Now, coming to the algorithms:
For a typical query there are thousands of webpages with helpful information. Algorithms are the computer processes and formulas that take your questions and turn them into answers. Today Google's algorithms rely on more than 200 unique signals or "clues" to guess what you might really be looking for. These signals include the terms on websites, the freshness of content, your geographic location, PageRank, the reputation of the website, and social signals, i.e. how past visitors reacted to the website's content.
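Of those signals, PageRank is the one with a well-known published idea behind it: a page is important if important pages link to it. Here is a minimal power-iteration sketch over a tiny invented link graph (the damping factor 0.85 is the commonly cited default):

```python
# Invented link graph: page A links to B and C, B links to C, C links back to A.
LINKS = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute rank along links until the scores settle."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outlinks in links.items():
            share = rank[p] / len(outlinks)  # each page splits its rank among its links
            for q in outlinks:
                new[q] += damping * share
        rank = new
    return rank

ranks = pagerank(LINKS)
# C receives links from both A and B, so it ends up with the highest score.
print(max(ranks, key=ranks.get))  # C
```

This sketch omits real-world complications such as pages with no outgoing links; it is only meant to show the "votes flow along links" intuition behind the signal.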
Now, in this fight among thousands of websites to rank on the first page, some take the bad-boy route and attempt unauthorised or deceptive measures to rank higher. These are called spam sites.
Spam sites attempt to game their way to the top of search results through techniques like repeating keywords over and over, buying third-party links that pass PageRank, or putting invisible text on the screen. It has probably happened to you several times: you search for a certain query, click on a displayed result and find nothing relevant on the webpage. That was a trick the site used to drive traffic.
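One crude way to see why keyword repetition is detectable: on a stuffed page, the keyword's share of all words is abnormally high. The threshold below is arbitrary and chosen purely for illustration; it is not a real Google rule, and real spam detection uses far richer signals:

```python
import re

def keyword_density(text, keyword):
    """Fraction of the words in the text that are the given keyword."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

normal = "We sell comfortable leather shoes for men and women."
stuffed = "shoes shoes buy shoes cheap shoes best shoes shoes shoes"

THRESHOLD = 0.3  # hypothetical cut-off for this toy example
for text in (normal, stuffed):
    d = keyword_density(text, "shoes")
    print(f"density={d:.2f} -> {'looks stuffed' if d > THRESHOLD else 'looks fine'}")
```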
This is bad for search because relevant websites get buried and become harder to find. The good news is that Google's algorithms can detect the vast majority of spam and demote it automatically. Google also has a separate team that manually reviews sites which cannot be checked for spam automatically.
Google keeps changing its search algorithm, almost 500 times a year, so that you cannot cheat the ranking formula. Otherwise a website ranking first for a certain keyword would stay there forever, even after its content became outdated or irrelevant. So to keep track of these search and ranking factors you need to stay updated about algorithm changes. In this course I will be uploading a monthly video on recent algorithm updates. Make sure you follow me and stay enrolled in the course so that you get notified when one comes out.
Further, I request you to visit the resources and study materials I have shared in the Facebook group; the link is shared in the comment box below. There are a few videos from Google and links to Google's official blogs where they post updates about these changes. Visit the group and get those.
So let me sum up the session: the whole search process.
Crawling and indexing:
• Google navigates the web by crawling.
• It follows links from page to page.
• It sorts the pages by the relevancy of their content against certain keywords.
• It keeps track of all that website information by saving it in the index, which is basically Google's database.
Searching and ranking:
• The user types a query to search.
• As soon as you put in a search keyword, algorithms get to work looking for clues to better understand what you mean.
• Based on these clues, they pull relevant documents from the index.
• The results are ranked based on various factors (safe search, site quality, user context, etc.).
• And voilà, you have the search result.
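The pipeline summed up above can be imitated end to end in a few lines: index a tiny corpus, pull the documents that contain every query term, and order them by a crude relevance score. The document IDs and texts are invented, and real ranking weighs hundreds of signals rather than raw term counts:

```python
import re
from collections import Counter

# Tiny made-up corpus standing in for a search engine's index.
DOCS = {
    "shoestore.example/men": "buy shoes for men best shoes online",
    "shoestore.example/kids": "kids shoes and sandals",
    "blog.example/history": "a short history of shoes",
}

def search(query, docs):
    """Return the IDs of docs containing every query word, best matches first."""
    terms = re.findall(r"[a-z]+", query.lower())
    scored = []
    for doc_id, text in docs.items():
        counts = Counter(re.findall(r"[a-z]+", text.lower()))
        if all(counts[t] > 0 for t in terms):
            # Score = total occurrences of the query terms (a crude relevance proxy).
            scored.append((sum(counts[t] for t in terms), doc_id))
    return [doc_id for score, doc_id in sorted(scored, reverse=True)]

print(search("buy shoes for men", DOCS))
```

Only the men's shoe store page contains all four query words, so it is the single result, which mirrors the lesson's point: matching and ranking together decide what you see on page one.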