Recognition and interpretation of facial expressions is a vital task in human-to-human communication. Accessing this channel of communication would open up a wide range of possibilities in human-computer interaction, healthcare, education, the entertainment industry and other areas. An effective expression recognition system depends on three high-level parts: effective machine learning algorithms, robust facial representations and an encompassing ground truth to train the classifiers. The latter is addressed by developing a new image database compiled from manually labelled web images. The database contains a large number of male and female subjects of different age groups and ethnicities performing seven basic expressions with varying head pose and under uncontrolled lighting conditions. Three facial descriptors based on the discrete cosine transform (DCT), local binary patterns (LBP) and Gabor filters are formulated in terms of regions around key points. Automatic key point selection using boosting is compared to a common block-based feature extraction method. In extensive experiments the web image database is utilized to compare AdaBoost and support vector machine (SVM) classifiers using the different facial representations. DCT and LBP features produce the best results with a combination of per-expression selected key points and SVM classifiers, whereas the Gabor filter based representation yields optimal performance when the regions are placed on a regular grid. It is furthermore observed that, contrary to intuition, selecting many key points might deteriorate performance rather than improve it.
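The region-based LBP descriptor mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it computes basic 8-neighbour LBP codes over a square region centred on a key point and returns a normalised 256-bin histogram. The region size and key-point format are hypothetical choices for the sketch.

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour LBP code for the centre pixel of a 3x3 patch."""
    center = patch[1, 1]
    # Neighbours in clockwise order starting at the top-left corner.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # Each neighbour >= centre contributes one bit to the 8-bit code.
    return sum(1 << i for i, n in enumerate(neighbours) if n >= center)

def lbp_histogram(image, key_point, region_size=16):
    """Normalised 256-bin LBP histogram over a region around a key point.

    `key_point` is a (row, col) pixel location; `region_size` is an
    assumed value, not taken from the paper.
    """
    r, c = key_point
    half = region_size // 2
    region = image[r - half:r + half, c - half:c + half]
    # Compute an LBP code for every interior pixel of the region.
    codes = [lbp_code(region[i - 1:i + 2, j - 1:j + 2])
             for i in range(1, region.shape[0] - 1)
             for j in range(1, region.shape[1] - 1)]
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)  # normalise to a unit-sum descriptor
```

In a full pipeline, histograms from all selected key-point regions would be concatenated into a single feature vector and fed to the SVM or AdaBoost classifier.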