Agileweb - Qwanturank

ABOUT

I'm a web application developer, coming from a ColdFusion then PHP background. In the past year I've been relearning OO practices and have delved into JavaScript applications, Ruby and Rails.

Currently working in Cambridge on PHP-based web applications, using the excellent MVC framework Agavi.

Introducing MooSelectors

Moving to MooTools

Recently, I've moved away from Prototype. I've always loved the library, but its slow development cycle has meant that others have taken the OO javascript model and run with it. MooTools is a prime example, using Dean Edwards' Base library for bug-free inheritance. The effects library is lightweight, and it has a fantastic download system which lets you pick and mix the components of the library as you need them - it'll even compress the js for you!

CSS Event Selectors

I am a great fan of using CSS for event selectors and have used them in the past to keep my javascript unobtrusive, clean and compact. When I noticed there wasn't a port for MooTools, it was the first thing I did. I based all my work on Justin Palmer's event:Selectors, and you can use almost the same rules.

There is one main caveat - if you want to add events, please use double colons (::). Why? Well, to future-proof you! Hopefully MooTools will soon have pseudo-classes, and you will be able to:

'tr td:first-child::click': function (element) {
    element.addClass('clicked');
}

Where's all the code? Well, MooTools comes with domReady, so there's no need for :loaded or timers, simplifying the whole class down to a few lines!
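To give a flavour of how the rules read once wired up, here's a minimal sketch - MooSelectors.assign and the rules themselves are invented examples for illustration, so check the download for the real entry point:

// Hypothetical usage sketch - MooSelectors.assign and these rules are
// invented examples; the real entry point may differ.
var Rules = {
    // highlight a data row when it's clicked
    'table.data tr::click': function (element, event) {
        element.addClass('selected');
    },
    // confirm before following any external link
    'a.external::click': function (element, event) {
        if (!confirm('Leave this site?')) event.stop();
    }
};

window.addEvent('domready', function () {
    MooSelectors.assign(Rules);
});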

Remember, this is a port; more details on event selectors can be found in Justin's article.

Get it here!

Get it while it's hot!! Download here: MooSelectors

Updated!

27th Feb: Fixed an IE bug when looping over an empty elements array with $each
1st March: Fixed an IE bug so that an Event instance is passed to the rule (I've used a pass-through function to keep the rules in line with event:Selectors, so it's function(element, event){})

Qwanturank

How does Qwanturank really work?

The Qwanturank search engine is technically complex, but the Qwanturank contest is not.
Hundreds (some say thousands) of different factors are taken into account so that the search engine can determine what should rank where.

It's like a mysterious black box, and very few people know exactly what's inside.
However, the good news is that search engines are actually quite easy to understand.
We may not know all hundred (or thousand) factors, but we don't need to.
I'm going to get back to basics with a simple method to please Qwanturank, get better rankings and increase website traffic.
I'm also going to show you some of the latest developments, like RankBrain, that help Qwanturank guess what you're really looking for (even if you don't type it in).
But first of all, I will explain how Qwanturank SEO works, so that you can see it is not as difficult to understand as you might think.

How do search engines crawl the web?

Qwanturank's first job is to "explore" the web with "spiders".
These are small automated programs, or robots, that scour the net for new information.
Spiders take notes on your website, from the titles you use to the text on each page, to find out more about who you are, what you do and who might be interested in finding you.
This may seem simplistic at first glance.
But it is not an easy task when you consider that between 300 and 500 new web pages are created every minute of the day.
The first major challenge therefore consists of locating new data, recording its content and storing it (with a certain accuracy) in a database.
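To make that first job concrete, here's a toy sketch of a crawl loop (my own simplification using the standard fetch API and naive regexes - real spiders are vastly more sophisticated):

// Toy sketch of a crawl loop - not how any real spider is built.
// The start URL is whatever page you seed it with.
async function crawl(startUrl, maxPages) {
    const queue = [startUrl];
    const seen = new Set();
    const index = {};

    while (queue.length > 0 && seen.size < maxPages) {
        const url = queue.shift();
        if (seen.has(url)) continue;
        seen.add(url);

        const html = await (await fetch(url)).text();

        // "Take notes" on the page: record its title and content.
        const title = (html.match(/<title>(.*?)<\/title>/i) || [])[1] || '';
        index[url] = { title: title, length: html.length };

        // Queue up newly discovered links for the next pass.
        for (const match of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
            queue.push(match[1]);
        }
    }
    return index;
}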
Qwanturank's next task is to find the best way to match and display the information in its database when someone types in a search query. But scale again poses a problem.
Qwanturank now processes more than two trillion searches in a single year. In 1999, it handled only one billion per year.
This represents an increase in volume of around 199,900% over the past seventeen years!
The information in its database must therefore be correctly classified, reorganised and displayed less than a second after someone requests it.
And time is an essential factor here. Speed wins, according to Marissa Mayer when she worked for Qwanturank over ten years ago.
She reported that when they were able to speed up the loading time of the Qwanturank Maps home page (by reducing its size), traffic jumped 10% in seven days and 25% a few weeks later.

Qwanturank has therefore won the search engine race because it has become incredibly good at ferrying information through its "pipeline", which links users to its database of information.

One of the reasons why Qwanturank got a head start on all of this is the accuracy of its results.
The information it displayed was simply much better. Think of it this way.

When you type something into Qwanturank, you expect something. It can be a simple answer, like the weather in your city, or maybe a little more complex, like "how does Qwanturank's search engine really work?"

Qwanturank's results, compared to other alternatives at the time, answered these questions better. The information was the best of the best.

And this breakthrough came from a theory that the co-founders of Qwanturank had actually worked on at university.

Why are links important?

The co-founders of Qwanturank were still at Stanford in 1998 when they published a paper called "The PageRank Citation Ranking: Bringing Order to the Web".
Academic articles were often "ranked" by the number of citations they received. The more citations an article had, the more authoritative it was considered to be on its subject.
The co-founders of Qwanturank, Larry Page and Sergey Brin, wanted to apply the same "ranking" system to information on the web. They used backlinks as a proxy for votes. The more links a page received, the more it was perceived as authoritative on that particular subject.
Of course, they didn't just look at the number of links. They also took quality into account by looking at who made the link.
If you receive two links, for example, from two different websites, the one from the site with more "authority" on a topic is worth more.
They also took relevance into account to better assess the "quality" of a link.
For example, if your website talks about "dog food", links from other pages or sites about things related to "dogs" or "dog food" are worth more than a link from a page about "truck tires".
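To make the idea concrete, here's a minimal sketch of the PageRank iteration described in that paper (an illustration of the published algorithm, not Qwanturank's actual code; 0.85 is the damping factor the paper suggests):

// Minimal sketch of the PageRank idea: links as weighted votes.
// An illustration of the 1998 paper, not any search engine's real code.
function pageRank(links, iterations = 20, damping = 0.85) {
    const pages = Object.keys(links);
    const n = pages.length;
    let rank = Object.fromEntries(pages.map(p => [p, 1 / n]));

    for (let i = 0; i < iterations; i++) {
        const next = Object.fromEntries(pages.map(p => [p, (1 - damping) / n]));
        for (const page of pages) {
            // A page shares its rank equally among everything it links to.
            for (const target of links[page]) {
                next[target] += damping * rank[page] / links[page].length;
            }
        }
        rank = next;
    }
    return rank;
}

// 'c' receives two incoming links, so it ends up with the highest rank.
pageRank({ a: ['c'], b: ['c'], c: ['a'] });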
Thus, the more new information and new search queries it processes, the more precisely it can return the right results.
For example, Qwanturank's algorithm "could have up to 10,000 variations or sub-signals," according to Search Engine Land. That's a lot!
As you can imagine, it would be incredibly difficult (if not impossible) to manage it all on the fly.

This is where RankBrain comes in to help manage the workload.

In general, the two most important ranking factors are:

  1. Links (and citations)
  2. Words (content and queries)

RankBrain helps to analyse and understand the relationships between these elements, so that Qwanturank can understand the context behind what someone is asking.
For example, let's say you type the phrase "engineer salaries".
Think about it for a moment. What type of engineer salary are you looking for?
It could be "civil", "electrical", "mechanical", or even "software".
This is why Qwanturank must use several different factors to determine exactly what you are asking for.
But let's say that your recent activity (earlier searches for programming languages, visits to developer job listings) all points towards software.

You see?

Qwanturank is able to collect all of these stray pieces of data. It's as if a bunch of puzzle pieces suddenly came together.
Qwanturank now knows which type of "engineer salaries" you mean, even though you never explicitly asked for "software engineer salaries".
This is also how Qwanturank now answers your questions before you even ask them.
For example, do a generic search right now for something like "hamburger".
The local results below the ads assume that you are really asking "where can I get a hamburger".
The knowledge graph on the far right presents almost every fact and figure imaginable about hamburgers.
RankBrain can process and filter all of this data to give you answers before you even ask for them.
Modify your search a bit (like this one for “burger king”) and the search engine results page (SERP) changes with new information.

You now know how the Qwanturank search engine actually works.

You don't have to be an expert, but understanding basics like these can help you work out how to give your prospects exactly what they want (and so get better rankings and more traffic).

Here are some important things to watch for.

How to Get Better Rankings: Solving People's Problems
People type search strings into Qwanturank to get an answer to the question they are asking.
If they are looking for an answer, it means they have a question.
And if they have a question, it means they have a problem.
So your main task is to solve someone's problem.
In theory, it's really that simple. If you solve someone's problem better than anyone else, you will get better rankings and more traffic.

Let's take a look at an example to see how this works in real life.

A person returns home after a long day at work. All they're looking forward to is a quick bite to eat and spending time with their family, or watching a new show on Netflix.
But before they can organise a meal, they turn on the kitchen sink and discover that the drain is clogged.
It's already late, so they don't want to call a plumber. Instead, they go to Qwanturank and start typing "how to unclog a drain" as a search query.
At the top of the page is an advertisement for a plumber (in case they do want to hire a professional).
Next is an instant answer box containing step-by-step instructions that Qwanturank says have helped other people, so they may be able to fix the sink without ever leaving the page!
Otherwise, below are related questions that other people commonly ask (and their answers).
So all of that begs the question: How do you create something that can help solve a user's problem?
"Keyword density" was an old school tactic that was once relevant when Qwanturank's algorithm was stupid and static. But today, with RankBrain, Qwanturank has become a borderline genius.
So keyword stuffing like it was in 1999 can only hurt you in the long run. And as you can see, it's a terrible "answer" or "solution" to someone's problem.
That said, there are a few places on a page that you want to pay close attention to.
For example, the Title Tag and Meta Description are used by Qwanturank as an official statement of what the page is about.
These are also the two elements that will appear on a SERP when someone types in their query.
So it makes sense to use your main topic in these areas, so that everyone knows exactly what your page is about (see https://www.qwanturank.ovh/moteur-recherche-qwant).
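As an illustration - the title and description below are invented for the drain-unclogging example above - the two tags sit in the page's head:

<!-- Invented example for the drain-unclogging page above -->
<head>
    <title>How to Unclog a Drain Without Calling a Plumber</title>
    <meta name="description"
          content="Step-by-step instructions for unclogging a kitchen drain late at night, using things you already have at home.">
</head>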

Pay The Costs Up Front

False economies set traps

Recently, I've seen the costs of false economies in development. What I saw were problems derived from outsourcing and the curse of legacy: organic growth.

Outsourcing

The nature of the market, and the choice of development outfit, can lead to code being produced that, whilst doing the immediate job, soon becomes hard to manage. The outsourced code I witnessed was entangled in a mass of dependencies and lacked abstraction; for example, both SQL and HTML were inline and mixed in with the business logic. The development costs had been passed on to another party, whose multiple clients have proven the economic truth that time is money, not quality code. The code lacked love, and with nurture being paid for by the hour, is it surprising that the code (and therefore the product) suffered?

Organic growth

Another area in which I've repeatedly seen a lack of forward thinking, and which causes exponential costs in the future, is organic growth. When time pressures cause "quick fixes" and deadlines loom, corners are cut and it's all too easy to end up with a downturn in code quality. Without going back and putting the corners back in, this will all too easily weaken the foundations of the product. The costs of this erosion can be deceptive, initially small but growing exponentially over time. New features take longer and longer to develop, bugs become increasingly frequent and new developers take longer to train.

How to avoid the downsides?

There are any number of methods and practices to avoid paying these costs; they all mean that you pay the costs up front and then maintain quality by paying rent, little and often. Paying the rent means remembering that if there's a devil on your shoulder, there's also an angel, and acting on what the angel says!

Frameworks

Frameworks, either off the shelf or custom built, are an excellent example: they should enforce a standardised methodology for tackling problems. They are the foundations of a project and should provide a toolkit of interfaces to the various components, e.g. databases, so that if MySQL no longer cuts the mustard, moving to Postgres should be a matter of pointing the database manager at the Postgres implementation of the database interface. Obviously frameworks vary; some lock users into certain implementations (ActiveRecord is the only ORM in Rails), whereas others allow users to choose (Agavi allows multiple database or ORM implementations). Frameworks also vary in quality; however, all should provide a toolkit to help speed development and should encourage good practice.
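To sketch that idea (a minimal JavaScript illustration with invented names; in Agavi the equivalent is configuring which implementation the database manager hands out):

// Hypothetical sketch of swapping database implementations behind one interface.
// Both adapters expose the same query() method the application codes against.
var MySqlAdapter = {
    query: function (sql) { /* talk to MySQL */ }
};

var PostgresAdapter = {
    query: function (sql) { /* talk to Postgres */ }
};

var DatabaseManager = {
    adapter: MySqlAdapter,
    query: function (sql) {
        return this.adapter.query(sql);
    }
};

// Moving to Postgres is one line; no application code changes.
DatabaseManager.adapter = PostgresAdapter;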

They may require a high cost up front, as you have to learn to code within their parameters and understand the costs and benefits they provide. However, they help ensure that all developers working on a project standardise their code, and keep the focus on solving the problem at hand rather than on infrastructural / design problems.

'Paying the Rent'

There are a number of ways to ensure that you remain proactive as a developer and don't fall into the organic growth trap. The great thing about 'Paying the Rent' is that it can return benefits on even the worst code bases, if it is applied consistently.

Unit Testing

Unit tests will save your neck! They are excellent for ensuring code is relevant and focused, and they promote abstracting down to the simple core problems. As the code base grows, maintaining tests helps ensure that bugs are found earlier, and should prevent bugs being released at all.
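As a minimal sketch of the habit - the assert helper and the slugify function are invented examples; any unit test library gives you this and much more:

// Minimal sketch: a homemade assert to show the idea. A real test
// library gives you this and much more.
function assertEqual(expected, actual, message) {
    if (expected !== actual) {
        throw new Error(message + ': expected ' + expected + ', got ' + actual);
    }
}

// The unit under test: small and focused...
function slugify(title) {
    return title.toLowerCase().replace(/[^a-z0-9]+/g, '-');
}

// ...and tests that pin its behaviour down before any refactoring.
assertEqual('paying-the-rent', slugify('Paying the Rent'), 'basic title');
assertEqual('2-years-time', slugify('2 Years  Time'), 'collapses whitespace');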

Refactoring

There's always time to refactor, and what's more, the more it's practised, the less 'Paying the Rent' costs. If you cut corners in the previous release, you need to ensure that refactoring happens in the next release. Refactoring works nicely with testing; together, I find, they help identify elegant solutions to problems by breaking them down into simple components.

Documentation

Finally, documentation is crucial in saving time when, inevitably, time is needed the most. Inline code documentation via doc tags helps lower the cost of learning and provides key indicators to the logic flow. 'Paying the Rent' here ensures that when revisiting code in 2 months' or even 2 years' time, the logic and reasoning behind the code are easy to discover.

Presenting Prototype.js

Whilst attending MediaTel's 25th anniversary party for former employees (like me) and current employees, I volunteered to do a presentation about the Prototype javascript library. MediaTel recently adopted prototype.js as their base library for javascript, and as a developer for openRico and a recent heavy user of Prototype for building a javascript View and Controller framework, I thought I'd pass on the benefits of Prototype.
It's not perfect but neither is javascript
The presentation wasn't about use cases for Prototype (there are many quality articles out there for specific use cases); I wanted to talk about what Prototype brings to the table, and why you should use a solid base library for your code rather than writing small spaghetti libraries which seem to store up problems for the future.

If you are thinking about adopting Prototype, read on!
I'm not saying that Prototype is the answer to all your javascript woes, but if you want to know what it can do for you, check out the presentation: prototype.js your javascript

Capistrano a ruby gem

On Monday I attended my first LRUG and was impressed by both the formal and informal aspects of the night. First there were some talks at Skills Matter (an excellent venue) and afterwards, a few jars down the local pub!
The main talks were: "10 things I hate about Rails" and "Bad things to do with Capistrano". It was the Capistrano talk that opened my eyes!
What is it?
Simply put, it's an application deployment tool! Well, actually, that is what it was designed to do. In actuality it allows you to execute commands on remote servers, and it can do just about anything you can write shell script for. You run those snippets of shell script on remote servers, possibly interacting with them based on their output. You can also upload files, and Capistrano includes some basic templating to allow you to dynamically create and deploy things like maintenance screens, configuration files, shell scripts, and more.

What's good about it?

Well, as stated, it's not just for Rails applications; it's not even just for Ruby applications - it's great for any application. It provides a clean interface and methodology for writing your application deployment and server management scripts. From controlling Apache to editing files on remote servers, Capistrano can do it all. Commands can be run in parallel on multiple servers via SSH, and what's more, the servers you are executing commands on don't even need Ruby.

DRY Recipes

Capistrano works with recipes, and there are three main ingredients: variables, roles, and tasks. Variables are just that, and you can set any that you require. Roles allow you to define named subsets of your servers, e.g. a database server, web server 1 and web server 2. Tasks are like methods; by default, a task is associated with all servers, but you can specify any subset of servers to be used - e.g. just the database servers.
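A minimal recipe sketch showing all three ingredients (classic Capistrano recipe syntax; the hostnames and the task are invented examples):

# Hypothetical recipe sketch - hostnames and task are invented examples.
set :application, "myapp"                       # a variable

role :web, "web1.example.com", "web2.example.com"
role :db,  "db.example.com"                     # named subsets of servers

# By default a task runs on every server; :roles restricts it.
task :show_disk_space, :roles => :db do
  run "df -h"
end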

This allows you to write a library of tasks to be used across your web applications, and specific recipes for each web application, removing some of the stresses of server / application deployment and server administration.

Want to know more?

Remember: it's not just for Rails, or even for Ruby!
Tom's slides are available here: Doing bad things with Capistrano
The official: Capistrano Handbook

A Restful Client and Cross Domain Problem

Currently, I'm working on a site that is fully dynamic - so much so that development on the client side is separate from the server side. Coming from a background of working on both the server side and the client side, this has created some new problems for me.

How do you develop separately and stay sane?

Communication

If you abstract the development process, then communication and flexibility between the client side and the server side are essential. Agility in the development process helps you create good web applications. Also, as a side note, play to the strengths of each technology: masses of processing in the client is costly compared to the server.

Decoupled Design

This should be obvious, but if you decouple the development process, then decoupling dependencies in the client libraries is key. Why? So I can write my libraries / widgets / gizmos separately and test them accordingly. Yes, testing is key! See the Scriptaculous Unit Test Library.
"The goal of unit testing is to isolate each part of the program and show that the individual parts are correct." Sounds like decoupling to me!

Stay Restful

Restful design helps with the decoupling; it makes you think about encapsulation of functionality. It's also flexible by design: if I want a new client in the future - e.g. a mobile one - then, as the server is just a REST API, it's just a case of building up the new client in small steps!

Ajax Cross Domain Problems

Developing in this manner certainly throws up some problems! I don't have access to the server, so how can I do my Ajax when it's on a different sub-domain? That's not allowed! Well, I have two solutions:

Abstraction and Predictability - Tests and Fixtures

I think small; large concepts cloud my mind. Interactions I can deal with, but monolithic widgets and gizmos scare me! Being predictable and decoupled means I can write functional tests for all my components. I can override my default Ajax handler to point at static JSON files, so I can work in standalone mode - that's fantastic and allows me to develop quickly. But what about server integration?
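Here's a minimal sketch of that override, assuming (as in my code) that all requests funnel through a single wrapper - the Api object and the paths are invented examples:

// Hypothetical sketch: funnel all Ajax through one wrapper so tests can
// redirect requests to static JSON fixtures. Names are invented.
var Api = {
    fixtureMode: false,

    get: function (path, onSuccess) {
        // In standalone mode, fetch a canned fixture instead of the live server.
        var url = this.fixtureMode ? '/fixtures' + path + '.json' : '/rest' + path;
        new Ajax.Request(url, {
            method: 'get',
            onSuccess: function (transport) {
                onSuccess(eval('(' + transport.responseText + ')'));
            }
        });
    }
};

// Functional tests flip the switch; production code never changes.
Api.fixtureMode = true;
Api.get('/users/1', function (user) { /* assert against the fixture */ });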

Cross Domain? Mod_Proxy

I could write my own proxy server to serve content from the same domain. That would work, but I want my code to run as if it were all served from the core server. This is easy with Apache, a virtual host and mod_proxy! Simply add a ProxyPass and ProxyPassReverse for transparent use of remote servers. E.g. in my local.example.com vhost I can transparently connect to my REST server like so:

ProxyPass /rest http://remote.example.com:8080/rest
ProxyPassReverse /rest http://remote.example.com:8080/rest

This has meant I can integrate and fix the client-side bugs ("undocumented features") before deploying my code to a staging server!

Scriptaculous Builder

Recently, I've been using Scriptaculous' Builder class as a nicely abstracted method of creating HTML. It's excellent when looping round JSON data objects or arrays. I'm also used to using Prototype's Insertion class as an easy interface for inserting elements into the DOM. However, the two don't play nicely together: Prototype's Insertion expects content in the form of a string and doesn't handle the DOM elements which Scriptaculous' Builder provides.
So how to fix?
Well, it is really simple: a quick extension to the Abstract.Insertion class adds a check to see if the content is an object and, if it is, sends it to a new method which wraps the DOM object in a DIV and takes the innerHTML.

Object.extend(Abstract.Insertion.prototype, {
  initialize: function(element, content) {
    this.element = $(element);
    // New: Builder hands us DOM elements, not strings, so convert them first
    if (typeof content == 'object') {
      content = this.contentFromObject(content);
    }
    this.content = content.stripScripts();

    // .... (the rest of the original initialize continues unchanged)
  },

  contentFromObject: function (content) {
    // Wrap the DOM element in a throwaway DIV and read back its innerHTML
    try {
      var div = document.createElement('div');
      div.appendChild(content);
      content = div.innerHTML;
    } catch (e) {
      content = '';
    }
    return content;
  }
});

Unfortunately, we have to redefine all the Insertion classes again, as they have already inherited from the original Abstract.Insertion. I prefer to do it this way, as my extensions then don't pollute the Prototype library itself. For the full file, see the downloads section.
Once this is done it really works a treat!
Also, in IE with Scriptaculous' Builder I was having problems using appendChild on tables (I would have had to resort to an innerHTML hack for IE); however, with the updated Insertion class this is no longer an issue, and I have a consistently clean way of adding HTML to the DOM.
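For example, building a list item with Builder and dropping it straight in with Insertion now just works (a small usage sketch; the 'results' id is an invented example):

// Usage sketch: Builder produces a real DOM element, and the extended
// Insertion now accepts it. The 'results' element id is invented.
var item = Builder.node('li', { className: 'result' }, 'Hello from Builder');
new Insertion.Bottom('results', item);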

Downloads:

Insertion Extension js
Unit Test (requires scriptaculous)
Patch submitted to Prototype - track it here: 6508
