This can only lead to significant changes in how Google works. Small modifiers in queries, such as "not", "a" or "no", may previously have been "misunderstood" by Google's algorithms, which until recently took a broad, word-by-word approach. Let me make it concrete. Before BERT, if I had wanted to squeeze the phrase "SEO consultant" into a text about pandas, an excellent trick would have been to do it with a negation, for example: certainly a panda will not behave like an SEO consultant! This was possible, and it worked, because Google was unable to fully process and understand the topic; it limited itself to looking at individual words while ignoring their context.
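To see what "looking at individual words while ignoring their context" means in practice, here is a deliberately naive sketch in Python. It is purely illustrative and is in no way Google's actual code: a bag-of-words scorer just counts shared words, so the negation contributes nothing and the panda sentence still "matches" the query.

```python
# Purely illustrative, NOT Google's algorithm: a naive bag-of-words matcher
# that scores relevance by counting shared words, so "not" carries no meaning.

def keyword_overlap_score(query: str, text: str) -> int:
    """Count how many distinct query words also appear in the text."""
    query_words = set(query.lower().split())
    text_words = set(text.lower().split())
    return len(query_words & text_words)

page = "certainly a panda will not behave like an seo consultant"
print(keyword_overlap_score("seo consultant", page))  # 2 -> a "perfect" match, negation ignored
```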
Put briefly, BERT works exactly on this and other aspects, and from now on queries (and with them your keyword analyses) will be evaluated in a much more detailed way, taking into consideration nuances that may previously have been ignored. Nuances that you too will have to take into account when you start your next keyword research! I would like to stress that BERT does not replace RankBrain; on the contrary, it works in parallel with it. It is an add-on that the search engine can use at its discretion for those queries where its "capabilities" are most useful... and apparently it has a lot of work to do. Google states that BERT currently affects as many as 10% of searches in the US sample, which, as you can imagine, is a huge amount. But how exactly does Google BERT work? BERT represents one of the most innovative developments in the field of NLP.

In case you don't know what it is, NLP stands for Natural Language Processing and is a field of Artificial Intelligence that gives machines the ability to read, understand and derive meaning from human language. Yes, just like in Blade Runner. But let's get back to BERT for a second. The models prior to this update, the so-called "traditional" ones (word2vec or GloVe come to mind), worked on single words, giving each one a single representation regardless of context. In Italian, for example, the word "credenza" would have received exactly the same representation in "La tua credenza è bellissima" ("Your cupboard is beautiful", referring to a piece of furniture) and in "credenza popolare" ("popular belief", referring to a widespread belief). BERT solves this problem with no small amount of cunning, mainly thanks to two different neural pre-training tasks. The first is called Masked Language Model (MLM) and is used to predict hidden words, so the model can verify that it has actually understood what the sentence is about. The way it works is simple but damn effective: the system basically tries to trick itself, hiding some words from the sentence and then guessing them from the surrounding context.
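To make the MLM idea tangible, here is a minimal sketch using the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint; both are my own choices for illustration, since the article does not name any specific toolkit. The pipeline hides a word behind the `[MASK]` token and asks BERT to guess it from the whole sentence, which is exactly the "trick itself" game described above, and also what gives BERT context-dependent representations, unlike word2vec or GloVe.

```python
# Minimal sketch of masked-word prediction, assuming the Hugging Face
# `transformers` library and the public bert-base-uncased checkpoint
# (my assumptions; the article does not specify a toolkit).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT has to guess the hidden word from the whole sentence, not word by word.
for prediction in fill_mask("Certainly a panda will not behave like an SEO [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```

Each printed line is one candidate word together with the probability BERT assigns to it. During pre-training this masking-and-guessing game is played over enormous amounts of text, and that is where BERT's feel for context comes from.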