How Do Search Engines Work?
Article updated by Joel Lee on 10/10/2017
For many, Google is the internet. It’s the starting point for finding new sites, and is arguably the most important invention since the internet itself. Without search engines, new web content would be inaccessible to the masses.
But do you know how search engines work? Every search engine has three main functions: crawling (to discover content), indexing (to track and store content), and retrieval (to fetch relevant content when users query the search engine).
Crawling
Crawling is where it all begins: the acquisition of data about a website.
This involves scanning sites and collecting details about each page: titles, images, keywords, other linked pages, etc. Different crawlers may also look for different details, like page layouts, where advertisements are placed, whether links are crammed in, etc.
But how is a website crawled? An automated bot (called a “spider”) visits page after page as quickly as possible, using page links to find where to go next. Even in the earliest days, Google’s spiders could read several hundred pages per second. Nowadays, it’s in the thousands.
When a web crawler visits a page, it collects every link on the page and adds them to its list of next pages to visit. It goes to the next page in its list, collects the links on that page, and repeats. Web crawlers also revisit past pages once in a while to see if any changes happened.
This means any site that’s linked from an indexed site will eventually be crawled. Some sites are crawled more frequently, and some are crawled to greater depths, but sometimes a crawler may give up if a site’s page hierarchy is too complex.
One way to understand how a web crawler works is to build one yourself. We’ve written a tutorial on creating a basic web crawler in PHP, so check that out if you have any programming experience.
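If PHP isn’t your thing, here’s a rough sketch of the same visit-collect-repeat loop in Python instead (it relies on the third-party requests and beautifulsoup4 packages, and the seed URL and page limit are just placeholders, not anything a real engine uses):

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_url, max_pages=50):
    """Breadth-first crawl: visit a page, queue its links, repeat."""
    queue = deque([seed_url])   # pages waiting to be visited
    seen = {seed_url}           # avoid visiting the same URL twice
    pages = {}                  # url -> raw HTML, handed off to the indexer

    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # unreachable pages are simply skipped

        pages[url] = response.text
        soup = BeautifulSoup(response.text, "html.parser")

        # Collect every link on the page and add new ones to the queue.
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if link.startswith("http") and link not in seen:
                seen.add(link)
                queue.append(link)

    return pages
```

Real crawlers add politeness delays, respect robots.txt, and spread the work across many machines, but the core loop is the same.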
Note that pages can be marked as “noindex,” which asks search engines to skip indexing them. Non-indexed parts of the internet are known as the “deep web,” and some sites, like those hosted on the TOR network, can’t be indexed by search engines at all.
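To illustrate, a well-behaved crawler only needs a small check before handing a page to the indexer. Here’s a hedged sketch in Python (the function name is ours, not anything from a real search engine):

```python
from bs4 import BeautifulSoup

def is_noindex(html):
    """Return True if the page asks search engines not to index it."""
    soup = BeautifulSoup(html, "html.parser")
    robots = soup.find("meta", attrs={"name": "robots"})
    return robots is not None and "noindex" in robots.get("content", "").lower()
```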
Indexing
Indexing is when the data from a crawl is processed and placed in a database.
Imagine making a list of all the books you own, their publishers, their authors, their genres, their page counts, etc. Crawling is when you comb through each book, while indexing is when you log each one in your list.
Now imagine it’s not just a room full of books, but every library in the world. That’s a small-scale version of what Google does: it stores all of this data in vast data centers holding thousands of petabytes’ worth of drives.
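The classic data structure behind this step is an inverted index: a map from every word to the pages that contain it, so that lookups later don’t have to re-read any pages. A toy Python version (real indexes store far more per entry, such as word positions and page metadata):

```python
import re
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of URLs whose text contains it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index

# Usage: in practice, pages would come from the crawl step above.
pages = {
    "example.com/a": "Gluten-free cookie recipes",
    "example.com/b": "Classic chocolate chip cookie recipe",
}
index = build_index(pages)
print(index["cookie"])  # {'example.com/a', 'example.com/b'}
```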
Retrieval and Ranking
Retrieval is when the search engine processes your search query and returns the most relevant pages that match your query.
Most search engines differentiate themselves through their retrieval methods: they use different criteria to pick and choose which pages fit best with what you want to find. That’s why search results vary between Google and Bing, and why Wolfram Alpha is so uniquely useful.
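As a toy illustration (not any particular engine’s method), retrieval against the inverted index sketched above could be as simple as intersecting the page sets for each query word:

```python
def retrieve(index, query):
    """Return the URLs that contain every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    # Start with the pages matching the first word, then intersect the rest.
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Using the index built in the indexing sketch:
print(retrieve(index, "cookie recipes"))  # {'example.com/a'}
```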
Ranking algorithms check your search query against billions of indexed pages to determine each one’s relevance. Companies guard their ranking algorithms as closely held trade secrets: a better algorithm translates to a better search experience.
They also don’t want web creators to game the system and unfairly climb to the tops of search results. If the internal methodology of a search engine ever got out, all kinds of people would surely exploit that knowledge to the detriment of searchers like you and me.
Search engine exploitation is possible, of course, but isn’t so easy anymore.
Originally, search engines ranked sites by how often keywords appeared on a page, which led to “keyword stuffing” — filling pages with keyword-heavy nonsense.
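An early-style ranker might have looked something like this deliberately naive sketch, which makes it obvious why repeating a keyword over and over pushed a page up the results:

```python
def naive_score(page_text, query):
    """Early-style ranking: count how often the query words appear."""
    words = page_text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

# A keyword-stuffed page easily beats a genuinely useful one.
print(naive_score("cookies cookies cookies cookies", "cookies"))    # 4
print(naive_score("a tested recipe for chewy cookies", "cookies"))  # 1
```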
Then came the concept of link importance: search engines valued sites with lots of incoming links because they interpreted site popularity as relevance. But this led to link spamming all over the web. Nowadays, search engines weight links depending on the “authority” of the linking site. Search engines put more value on links from a government agency than links from a link directory.
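Weighting links by the authority of the page that casts them is the intuition behind Google’s PageRank. Below is a heavily simplified sketch, not the real algorithm: every page repeatedly shares its score with the pages it links to, so a link from a well-regarded site passes along more value than one from an obscure link directory (the link graph here is made up):

```python
def simple_pagerank(links, iterations=20, damping=0.85):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}

    for _ in range(iterations):
        # Every page keeps a small base score...
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        # ...and hands the rest of its score out along its outgoing links.
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                if target in new_rank:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical link graph: a government site and a link directory both
# link to "blog", but the directory has little authority of its own.
graph = {
    "gov-agency": ["blog"],
    "link-directory": ["blog", "spam1", "spam2"],
    "blog": ["gov-agency"],
    "spam1": [],
    "spam2": [],
}
print(simple_pagerank(graph))
```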
Today, ranking algorithms are shrouded in more mystery than ever before, and “search engine optimization” isn’t as important as it once was. Good search engine rankings now come from high-quality content and great user experiences.
What’s Next for Search Engines?
Ah, now there’s an interesting question. The answer is “semantics”: the meaning of the page’s content. You can read more about it in our overview of semantic markup and its future impact.
But here’s the gist of it.
Right now, you can search for “gluten-free cookies,” but the results won’t necessarily all be gluten-free cookie recipes. Instead, you might find regular cookie recipes that say “This recipe is not gluten-free.” Such a page has the right keywords, but the wrong meaning.
With semantics, you can search for cookie recipes and then remove certain ingredients: flour, nuts, etc. You can also narrow down results to only recipes with prep times less than 30 minutes and review scores of 4/5 or greater. That would be cool, right? That’s where we’re heading!
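Here’s a sketch of what such a structured, meaning-aware query could look like, assuming the engine has already extracted attributes like ingredients, prep time, and rating from each recipe page (the recipe data below is invented for illustration):

```python
recipes = [
    {"title": "Gluten-Free Oat Cookies", "ingredients": {"oats", "sugar", "butter"},
     "prep_minutes": 25, "rating": 4.6},
    {"title": "Classic Flour Cookies", "ingredients": {"flour", "sugar", "butter"},
     "prep_minutes": 20, "rating": 4.8},
    {"title": "Nutty Chocolate Cookies", "ingredients": {"flour", "nuts", "cocoa"},
     "prep_minutes": 45, "rating": 4.9},
]

def semantic_search(recipes, exclude_ingredients, max_prep_minutes, min_rating):
    """Filter on what the recipes mean, not just the words they contain."""
    return [
        r["title"] for r in recipes
        if not (r["ingredients"] & exclude_ingredients)
        and r["prep_minutes"] <= max_prep_minutes
        and r["rating"] >= min_rating
    ]

# "Cookie recipes without flour or nuts, under 30 minutes, rated 4/5 or better."
print(semantic_search(recipes, {"flour", "nuts"}, 30, 4.0))
# ['Gluten-Free Oat Cookies']
```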
If you found this interesting, you might also like to learn about how image search engines work.
Image Credit: wwwebmeister/Depositphotos