How can you, the consumer, trust the customer feedback posted on online shopping sites when trying to make a purchasing decision? Conversely, how can the company running the site protect its reputation from false negative feedback? Researchers in Australia hope to answer these questions with computer software that can detect false feedback and ensure the integrity of ecommerce trust management systems. They provide details in the International Journal of Trust Management in Computing and Communications.
Soon Keow Chong and Jemal Abawajy of the Parallel and Distributed Computing Lab at Deakin University, Geelong, Australia, explain that trust management is a vital component of any ecommerce site; it forms and maintains the relationships between trading partners. However, it relies on feedback proffered by those trading partners and as such is not infallible. There is always the potential for feedback to be manipulated strategically, harming the site's reputation on a small scale; in the worst case, a site might undergo a "rating attack" that causes serious damage to brand and company image.
The team has now developed an algorithm that can identify and block falsified feedback before it reaches a site's trust management system, making the system more robust against rating manipulation attacks. The team points out that the algorithm can detect when an established, credible user who has built up trust on a system suddenly begins cheating, or when a multitude of new users push false feedback onto the site.
The team explains that the feedback verification scheme uses a clustering algorithm to group similar ratings together and define the majority rating. The trust value of the rater is based on his or her past behavior and the frequency of rating submissions. To determine the quality of a rating, the team uses a trust threshold, which designates the minimum value required to establish a trust relationship. All ratings that fall within the majority cluster are combined with the trust value of the rater, the transaction frequency and the transaction value to determine the credibility of the ratings.
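The paper itself does not publish code, but a minimal sketch of this step might look like the following. It assumes a simple one-dimensional clustering of star ratings; the helper names (`majority_cluster`, `rater_trust`), the weights and the `TRUST_THRESHOLD` value are illustrative assumptions, not taken from the original work.

```python
TRUST_THRESHOLD = 0.5  # hypothetical minimum trust value; not from the paper

def majority_cluster(ratings, tolerance=1.0):
    """Group similar ratings and return the largest ("majority") cluster.

    A crude 1-D clustering: consecutive sorted ratings within `tolerance`
    of each other share a cluster. A real system might use k-means instead.
    """
    clusters = []  # each cluster is a list of ratings
    for r in sorted(ratings):
        if clusters and abs(r - clusters[-1][-1]) <= tolerance:
            clusters[-1].append(r)
        else:
            clusters.append([r])
    return max(clusters, key=len)

def rater_trust(past_accurate, past_total, submission_freq, max_freq=50):
    """Illustrative trust value from past behaviour and submission frequency."""
    accuracy = past_accurate / past_total if past_total else 0.5  # neutral start
    frequency = min(submission_freq, max_freq) / max_freq
    return 0.7 * accuracy + 0.3 * frequency  # weights are assumptions

# Usage: existing ratings for one product, plus one rater to assess
ratings = [5, 5, 4, 4, 5, 1, 1]
majority = majority_cluster(ratings)
trust = rater_trust(past_accurate=18, past_total=20, submission_freq=12)
print(majority, round(trust, 2), 4 in majority and trust >= TRUST_THRESHOLD)
```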
The algorithm then adds "weight" (credibility) depending on various factors: rating frequency, total submissions, low-value versus high-value transactions, total feedback on a given product and other parameters. Any feedback whose credibility falls below a set threshold is classed as false and is not added to the trust management system; the rejection also counts against the submitting user's individual trust value.
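Again as an illustrative sketch rather than the authors' actual scoring function, the weighting and threshold check described above might be combined along these lines; the factor weights, the normalisation constants, the `CREDIBILITY_THRESHOLD` and the trust penalty applied to a rejected rater are all assumptions.

```python
CREDIBILITY_THRESHOLD = 0.6  # hypothetical cut-off; not from the paper

def credibility(in_majority, rater_trust_value, transaction_value,
                rating_frequency, total_feedback_on_product):
    """Combine the factors named in the article into one credibility score.

    Each factor is normalised to [0, 1]; the weights are illustrative only.
    """
    score = 0.0
    score += 0.35 * (1.0 if in_majority else 0.0)          # agrees with majority cluster
    score += 0.35 * rater_trust_value                      # rater's own trust value
    score += 0.15 * min(transaction_value / 500.0, 1.0)    # high-value purchases weigh more
    score += 0.10 * min(rating_frequency / 20.0, 1.0)      # how often the rater submits
    score += 0.05 * min(total_feedback_on_product / 100.0, 1.0)
    return score

def accept_feedback(rater, **factors):
    """Accept or reject a rating; rejection also lowers the rater's trust value."""
    if credibility(rater_trust_value=rater["trust"], **factors) >= CREDIBILITY_THRESHOLD:
        return True
    rater["trust"] = max(0.0, rater["trust"] - 0.1)  # penalise suspected false feedback
    return False

# Usage: a rating that disagrees with the majority, from a low-activity rater
rater = {"trust": 0.7}
ok = accept_feedback(rater, in_majority=False, transaction_value=40,
                     rating_frequency=2, total_feedback_on_product=7)
print(ok, round(rater["trust"], 2))
```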
Chong, S.K. and Abawajy, J.H. (2015) 'Mitigating malicious feedback attacks in trust management systems', Int. J. Trust Management in Computing and Communications, Vol. 3, No. 1, pp. 1-18.
Original article: Finding fake feedback.
via Science Spot » Inderscience http://ift.tt/1MERBtX