People can usually make a good guess at a person’s age by looking at their face and assessing the smoothness and general condition of the skin, the jowls, and other features. Face recognition software, on the other hand, can recognise a face with varying degrees of success depending on the training data used, but estimating age has not yet become a trivial computational matter. Part of the problem is that faces change from moment to moment as we show our emotions through laughter, frowns, sadness, disgust, and other facial expressions.
Now, a team from India, writing in the International Journal of Intelligent Systems Technologies and Applications, describes a new approach to age estimation that fuses local and global features in an image of a person’s face, looking past the facial expression to estimate the person’s age.
Subhash Chand Agrawal, Anand Singh Jalal, and Rajesh Kumar Tripathi of GLA University, Mathura, explain how they use the Viola-Jones algorithm to pick out a face from any given photograph. The system then partitions the face into 16 by 16 non-overlapping blocks and applies a grey-level co-occurrence matrix to each block. It also locates four facial regions – the eyes, forehead, left cheek, and right cheek – in the facial image. The algorithm then examines the detail in these blocks according to the region examined and compares it with similar blocks from a training set of faces in which the age of the person in each photograph was already known.
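To make the pipeline concrete, here is a minimal sketch of the kind of processing described above. It is not the authors’ code: it assumes OpenCV’s bundled Haar cascade for Viola-Jones face detection and scikit-image for the grey-level co-occurrence matrix, and the block size, resize dimensions, and choice of texture statistics are illustrative guesses rather than values taken from the paper.

```python
# Sketch of block-wise GLCM feature extraction from a detected face.
# Assumptions: OpenCV Haar cascade stands in for Viola-Jones; block size,
# face resize, and GLCM properties are illustrative, not from the paper.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def face_glcm_features(image_path, block=16):
    # Viola-Jones style face detection via OpenCV's bundled Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))

    # Partition the face into non-overlapping blocks and compute GLCM
    # texture statistics for each block (the "local" features).
    feats = []
    for r in range(0, face.shape[0], block):
        for c in range(0, face.shape[1], block):
            patch = face[r:r + block, c:c + block]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            feats.extend(graycoprops(glcm, p)[0, 0]
                         for p in ("contrast", "homogeneity",
                                   "energy", "correlation"))
    return np.array(feats)

# In a full system, these block-level descriptors would be fused with
# global features and fed to a regressor trained on faces of known age.
```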
“Our experimental results show that fusion of local and global features performs better than existing approaches,” the team writes. Their tests were able to estimate a person’s age in a photo to within a mean absolute error of 6.31 years for a neutral expression, with similar values for an angry expression. For happy, sad, disgusted, and surprised expressions the errors were slightly higher, although generally still better than those of the state-of-the-art algorithms against which they tested their approach.
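For readers unfamiliar with the metric, the mean absolute error quoted above is simply the average absolute gap, in years, between predicted and true ages; the snippet below is a generic illustration, not the paper’s evaluation code, and the example ages are made up.

```python
# Mean absolute error: average absolute difference between true and
# predicted ages. Example values are purely illustrative.
import numpy as np

def mean_absolute_error(true_ages, predicted_ages):
    return float(np.mean(np.abs(np.array(true_ages) - np.array(predicted_ages))))

# mean_absolute_error([25, 40, 63], [31, 37, 60]) -> 4.0
```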
Aside from refining the system, the team next plans to apply it to photographs with complicated backgrounds and to faces of different ethnicities.
Agrawal, S.C., Jalal, A.S. and Tripathi, R.K. (2020) ‘Local and global features fusion to estimate expression invariant human age’, Int. J. Intelligent Systems Technologies and Applications, Vol. 19, No. 2, pp.155–171.