5 February 2024
Google BERT: Humanizing the Search Engine Algorithm
Search Engine
There have been many updates to the Google algorithm over the years. Currently, Google boasts a highly sophisticated algorithm capable of interpreting human queries and accurately matching results to the user's needs. Isn't that cool? Absolutely! They named it after a puppet called "Bert": BERT stands for Bidirectional Encoder Representations from Transformers.

Many years have passed since Google first launched its search engine globally. Year after year, Google has introduced new innovations and technologies to optimize its SERP and deliver the best, most accurate results for its users, and it keeps updating the algorithm to provide the best solutions for what users need.
So, how many major updates has the Google algorithm received, and what do they contribute to the Search Engine Results Page?
There have been eight major updates to the Google algorithm since 2011:
1. Panda. Targets duplicated, plagiarized, or thin content, and keyword stuffing.
2. Penguin. Targets spammy websites with irrelevant links and over-optimized anchor text.
3. Hummingbird. Uses natural language processing to target keyword stuffing and tackle low-quality content.
4. Mobile. Targets websites that lack a responsive layout on mobile; since this update, Google ranks every website by how well it performs on a mobile device.
5. RankBrain. Introduced as part of Hummingbird, RankBrain uses a machine learning system to help Google understand the meaning behind queries and serve the best-matching results in response. Google also considers each user's search history, so it can deliver results personalized to that user's needs.
6. Medic. Targets websites that lack authority on YMYL (Your Money Your Life) topics and fall short on E-A-T (Expertise, Authoritativeness, Trustworthiness) content.
7. BERT. BERT stands for Bidirectional Encoder Representations from Transformers. How does it work? Like Hummingbird and RankBrain, BERT uses machine learning to better understand human queries: it interprets text word by word and identifies the relationships between words, allowing Google Search to grasp the nuance of queries in much greater depth.
8. Google Core Updates. There is less transparency about what these updates change and which parts of search they are intended to improve, but SEO practitioners can find some information about them on Google's core updates page.
How does BERT work in depth?
Here’s a simplified explanation of how BERT works:
1. Transformer Architecture: BERT is based on the transformer architecture, which consists of encoder and decoder layers. However, BERT only uses the encoder part, as it’s primarily designed for tasks like sentence classification, named entity recognition, and question answering.
2. Pre-training and Fine-tuning: BERT is pre-trained on large corpora of text data using unsupervised learning. During this pre-training phase, the model learns to understand the context of words in a sentence by predicting missing words (the masked language model task) and predicting whether one sentence follows another (the next sentence prediction task); a small illustration of masked-word prediction follows this list. This pre-training phase is crucial for BERT to develop a deep understanding of language patterns.
3. Tokenization: Before feeding text into BERT, it is tokenized into individual words or subwords. BERT uses a technique called WordPiece tokenization, which breaks words down into smaller subwords or characters, allowing the model to handle out-of-vocabulary words and improve its performance on rare or unseen words (see the tokenization sketch after this list).
4. Embedding Layer: Each token is converted into a vector representation called an embedding. BERT sums three embeddings per token: a token embedding (the word piece itself), a segment embedding (to distinguish the two sentences in a pair), and a position embedding (to encode where the token sits in the sequence).
5. Transformer Encoder Layers: BERT consists of multiple transformer encoder layers. Each layer contains self-attention mechanisms and feed-forward neural networks.
– Self-Attention: This mechanism allows BERT to consider the context of each word in relation to all other words in the sentence, capturing dependencies and relationships between words. It helps BERT understand which words are relevant to each other within the context of a sentence (a minimal numeric sketch of this mechanism follows the list).
– Feed-Forward Neural Networks: After self-attention, the output is passed through a feed-forward neural network consisting of fully connected layers with a non-linear activation (GELU in BERT, a smooth variant of the ReLU used in the original Transformer).
6. Output Layers: BERT outputs contextualized representations for each token in the input sequence. These representations capture the meaning and context of each word in the sentence.
7. Fine-tuning: After pre-training, BERT can be fine-tuned on specific downstream NLP tasks by adding task-specific output layers (e.g., a softmax layer for classification) and fine-tuning the entire model on a smaller dataset related to the task at hand. Fine-tuning adjusts BERT's parameters to better suit the specific task, resulting in improved performance (a brief fine-tuning sketch appears below).
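To make the masked language model task from step 2 concrete, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and the publicly released bert-base-uncased checkpoint (neither is part of Google's production search stack), and the example sentence is invented for illustration:

```python
# pip install transformers torch  (assumed environment, not mentioned in this article)
from transformers import pipeline

# The fill-mask pipeline wraps a BERT model trained with the masked language model objective:
# it predicts the hidden token using context from BOTH sides of the gap (bidirectional).
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("She parked the car on the [MASK] of the road."):
    # Each prediction carries a candidate word and the model's confidence in it.
    print(prediction["token_str"], round(prediction["score"], 3))
```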
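Steps 3, 4, and 6 can be sketched with the same assumed library: the tokenizer shows WordPiece splitting, and the encoder returns one contextualized vector per token. Again, this is an illustrative sketch with an invented sentence, not Google's internal pipeline:

```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "BERT handles uncommonly tokenizable words gracefully"
# WordPiece tokenization: words missing from the vocabulary are broken into
# smaller pieces, with continuation pieces prefixed by "##".
print(tokenizer.tokenize(sentence))

# The tokenizer also adds the special [CLS]/[SEP] tokens and builds the token ids
# and segment ids that the embedding layer consumes.
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextualized 768-dimensional vector per token: the same word gets a
# different vector in a different sentence, because attention mixes in context.
print(outputs.last_hidden_state.shape)
```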
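The self-attention mechanism from step 5 can be reduced to a few lines of plain Python. This is a toy, single-head version with random vectors standing in for real embeddings, not BERT's actual weights:

```python
import numpy as np

def self_attention(x, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention over a sequence of token vectors."""
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # relevance of every token to every other token
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability before softmax
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V                            # each output is a context-weighted mix of values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))                  # 3 tokens, hidden size 4 (toy numbers)
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
print(self_attention(tokens, W_q, W_k, W_v).shape)  # (3, 4): one contextualized vector per token
```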
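Finally, the fine-tuning described in step 7 amounts to adding a small task-specific head on top of the pre-trained encoder and training the whole stack on labelled examples. A rough sketch, again assuming the transformers library, with an invented one-example "dataset":

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# A fresh classification head (softmax over num_labels classes) is stacked on top of BERT.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# One hypothetical labelled example standing in for a task-specific dataset.
batch = tokenizer("great quality and fast delivery", return_tensors="pt")
labels = torch.tensor([1])                      # invented labels: 1 = positive, 0 = negative

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)         # forward pass returns the classification loss
outputs.loss.backward()                         # gradients flow through the head AND the encoder
optimizer.step()                                # fine-tuning nudges all of BERT's parameters
```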
In summary, BERT leverages transformer architecture, pre-training on large text corpora, tokenization, and self-attention mechanisms to understand the context of words and sentences, making it highly effective for various NLP tasks.
Let's Work Together
I'm passionate about working with others, so if you run into any obstacle in your work, just contact me via the link below!