8 April 2026

A borrower and a lender be

Peer-to-peer (P2P) lending, a form of finance that allows individuals and small businesses to borrow directly from each other through online platforms, has attracted growing academic and policy attention in recent years, especially as it reshapes traditional credit markets. An analysis in the International Journal of Accounting and Finance has looked at more than three decades of research in this area. The results suggest that while the field has expanded rapidly, there are many gaps in our understanding of P2P lending that could have implications for international financial systems.

The researchers examined more than 500 scholarly articles published between 1990 and 2023. The analysis charts how interest in P2P lending has changed as financial technology, or FinTech, itself has developed over that period. By removing conventional intermediaries such as banks, these platforms not only reduce costs and accelerate loan processing but also broaden access to credit. P2P lending now serves borrowers globally who lack access to conventional financial systems. This opens up opportunities for many previously disenfranchised parts of society worldwide.

There has been a marked increase in research into P2P lending in recent years. This suggests that it is growing in complexity and economic relevance. Most of the research focuses on loan default risk and on investor behaviour, looking at the psychological factors influencing financial decisions and trust on both sides.

The emphasis on trust is central to the P2P lending model. Unlike traditional banking, where institutions act as gatekeepers and risk assessors, P2P lending relies almost entirely on digital signals of reliability and user-generated information. There are, however, geographical imbalances in the research, with most of it having been conducted in Europe and the USA, despite rapid growth of P2P lending in emerging markets. This imbalance suggests that our current understanding may not fully explain how these platforms operate in different regulatory environments or cultural contexts, where financial behaviour and institutional trust can be very different.

The gaps in the research limit the ability of policymakers and practitioners to design effective frameworks. The absence of regulation can expose participants to fraud or default. Nevertheless, in emerging economies, where access to traditional banking is often limited, P2P lending has the potential to expand financial inclusion by offering credit to small businesses and individuals without established credit histories.

Ritika and Khanna, A. (2025) ‘Unveiling the dynamics of peer-to-peer lending: a bibliometric analysis’, Int. J. Accounting and Finance, Vol. 12, No. 3, pp.145–184.

7 April 2026

Recommend-a-course

Research in the International Journal of Computational Systems Engineering introduces a hybrid recommendation model that could help with one of the common challenges facing universities offering online courses: how to recommend the most appropriate course to prospective students.

The approach uses Naive Bayes classification and collaborative filtering to improve the accuracy and personalisation of course suggestions. This, the researchers suggest, could ultimately enhance the learning experience for students.

Online course recommendation systems have long struggled with issues such as the “cold start” problem, data sparsity, and inadequate personalisation. The “cold start” problem occurs when a recommendation system lacks sufficient historical data about new users or courses, making it difficult to provide relevant suggestions. Data sparsity, on the other hand, arises because each student interacts with only a small fraction of the available courses, leaving most student-course pairs without any rating or enrolment data and hindering the system’s ability to capture students’ preferences. Additionally, inadequate personalisation leads to generalised recommendations that may not match the unique needs of individual students, resulting in a less effective user experience.

The hybrid model discussed in IJCSE could resolve these issues. By using Naive Bayes classification, it can predict the likelihood that a particular course aligns with the interests of a given student based on course features. Collaborative filtering then examines patterns in student behaviour and identifies similar users to recommend courses based on what others with similar learning habits have chosen.
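To make the hybrid idea concrete, here is a minimal sketch of blending a content-based score (standing in for the naive Bayes probability) with a collaborative-filtering score. The toy ratings, the cosine-similarity choice, and the blending weight are illustrative assumptions, not details taken from the paper.

```python
from math import sqrt

# Hypothetical toy data: which courses each student has taken and liked (1 = liked).
ratings = {
    "alice": {"python101": 1, "stats200": 1},
    "bob":   {"python101": 1, "ml300": 1},
    "carol": {"stats200": 1, "ml300": 1},
}

def cosine(u, v):
    """Cosine similarity between two students' rating dicts."""
    common = set(u) & set(v)
    num = sum(u[c] * v[c] for c in common)
    du = sqrt(sum(x * x for x in u.values()))
    dv = sqrt(sum(x * x for x in v.values()))
    return num / (du * dv) if du and dv else 0.0

def cf_score(student, course):
    """Collaborative-filtering score: similarity-weighted votes of other students."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == student:
            continue
        sim = cosine(ratings[student], r)
        num += sim * r.get(course, 0)
        den += sim
    return num / den if den else 0.0

def hybrid_score(student, course, content_score, alpha):
    """Blend a content-based probability (e.g. from naive Bayes) with the CF score."""
    return alpha * content_score + (1 - alpha) * cf_score(student, course)
```

In this toy example a fixed `alpha` is passed in; the paper's dynamic weight adjustment would set it high for new users, who have little history for collaborative filtering to exploit, and lower it as ratings accumulate.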

The system also adds a dynamic weighting mechanism that rebalances the model’s recommendations depending on whether a student is a new user or an experienced one. This improves the precision and diversity of the suggestions, ensuring that the system remains useful for all types of students.

The team tested the system with data from 25,000 students and 1,000 courses. Compared to traditional methods, it demonstrated a 12% improvement in Precision@10 (the percentage of relevant courses within the top 10 recommendations) and a 10.5% improvement in Recall@10 (the percentage of all relevant courses that appear in the top 10). Most notably, in cold start scenarios, the hybrid model significantly outperformed deep neural networks. Even with a data sparsity of 98%, the hybrid model’s accuracy fell at half the rate of traditional algorithms.
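For readers who want the metric definitions precisely, Precision@k and Recall@k can be computed in a few lines; the course lists below are hypothetical examples, not data from the study.

```python
def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommendations that are actually relevant."""
    top = recommended[:k]
    return sum(1 for c in top if c in relevant) / k

def recall_at_k(recommended, relevant, k=10):
    """Fraction of all relevant items that appear in the top-k recommendations."""
    top = recommended[:k]
    return sum(1 for c in top if c in relevant) / len(relevant)

# Hypothetical example: 4 recommendations, 3 relevant courses overall.
recs = ["python101", "art110", "stats200", "bio150"]
relevant = {"python101", "stats200", "ml300"}
p = precision_at_k(recs, relevant, k=4)  # 2 of the 4 shown are relevant
r = recall_at_k(recs, relevant, k=4)     # 2 of the 3 relevant ones are shown
```

The two metrics pull in different directions: precision rewards showing only relevant items, while recall rewards covering everything relevant, which is why both are usually reported together.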

Chen, Z. and He, M. (2026) ‘Research on integrating naive Bayes and collaborative filtering into an online-course recommendation model for universities’, Int. J. Computational Systems Engineering, Vol. 10, No. 6, pp.12–21.

2 April 2026

Teach your children well

A study of junior high schools in Indonesia has found that educational leadership influences how well they cultivate entrepreneurial skills in their students. Indeed, these skills can be improved by encouraging innovation from the top and by fostering collaborative environments in which students, teachers, and communities all work together to shape educational outcomes. The details are reported in the International Journal of Business Innovation and Research.

The research surveyed 350 schools and examined the relationship between entrepreneurial leadership and entrepreneurial performance. Entrepreneurial leadership refers to a style of management that prioritises vision, innovation, and the mobilisation of others. In schools, this translates into principals and senior staff who support experimentation in teaching, promote creative problem-solving, and encourage initiative among both students and educators.

Entrepreneurial performance, on the other hand, is defined more broadly than business creation. It includes the ability of a school to generate innovative activities, equip students with problem-solving and adaptive skills, and contribute to longer-term socio-economic objectives such as employability and resilience in changing labour markets.

The study’s main finding is that leadership is not the sole driver of such outcomes in education. Rather, its effects are mediated by what researchers describe as value co-creation. This term derives from service management theory and refers to a process in which value is produced through interaction, rather than being delivered unilaterally by an organisation to passive recipients. In the educational context, this implies a shift away from viewing teaching as a one-way transfer of knowledge, towards a model in which students, teachers, school leaders, and other stakeholders work together to design appropriate learning experiences and solve problems.

In countries where entrepreneurship plays a significant role in economic development, schools are increasingly seen as a foundation for developing the entrepreneurial mindset in students. The research indicates that policy initiatives which focus solely on embedding entrepreneurship in the curriculum may not work as well as those that also improve and guide leadership practices and institutional culture.

Indira, S.S., Sasmoko S., Bandur, A. and Pradipto, Y.D. (2026) ‘Business perspectives on value cocreation as a mediator for entrepreneurial performance in educational contexts’, Int. J. Business Innovation and Research, Vol. 39, No. 8, pp.1–24.

1 April 2026

Adapting to AI adoption

Research in the International Journal of Business Information Systems suggests that the adoption of artificial intelligence (AI) is remarkably uneven across Italian firms. While some may have made a deliberate choice not to use AI, of the many that are planning to use it, some still lack the organisational structures needed to deploy the technology effectively.

This is one of the first systematic studies of AI adoption in Italy. It found that many early innovators are eagerly integrating AI into their operations, while others are moving more cautiously and remain in the preliminary stages of exploration. This uneven uptake reflects a broader international pattern, as businesses look for AI opportunities but struggle with the complexities of this rapidly evolving area of computing.

Despite the growing interest and investment in, specifically, generative AI, this research shows that many firms do not have a structured approach to the technology. The researchers propose an “AI Readiness Level” (AIRL) framework that could help organisations develop their AI strategy.

This notion of readiness is not just about technical capability: it also takes into account the quality of a company’s data infrastructure, the availability of skilled personnel, leadership support, and external factors such as regulatory pressures or market competition. AIRL provides a model of the progressive stages of development, from initial awareness to full operational integration.

The team points out that firms that have adopted AI report improvements in operational efficiency, enhanced customer engagement, and more informed decision-making through predictive analytics. The research suggests that adopting AI is less a matter of installing new software than of carrying out organisational transformation. Companies need to align their technological capabilities with workforce skills, management strategies, and governance structures, the authors explain. Those that fail to do so risk falling behind competitors that are already using this technology to their advantage.

Garlatti Costa, G., Pugliese, R. and Venier, F. (2026) ‘Exploring artificial intelligence adoption among Italian firms: the AI readiness level’, Int. J. Business Information Systems, Vol. 51, No. 7, pp.1–22.

31 March 2026

Greening the supply chain

Research in the International Journal of Environment and Pollution has looked at carbon-reduction strategies across supply chains. The findings suggest that uncertainty in consumer demand need not preclude environmental gains.

The team looked at a four-stage supply chain, encompassing suppliers, producers, retailers, and consumers. They used a structured economic model, the Stackelberg game, to examine the dominant “actor”, in this case the manufacturer. The dominant actor makes the initial decisions, and the other players adjust their behaviour accordingly. Such a sequential decision-making framework models the way many industries function, where firms exert influence over pricing and production conditions downstream.
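The leader-follower logic can be sketched numerically. In this minimal two-player example, with a linear demand curve and parameters invented for illustration (not drawn from the paper), the retailer (follower) best-responds to any wholesale price, and the manufacturer (leader) chooses its price while anticipating that reaction.

```python
# Hypothetical linear-demand Stackelberg sketch; a, b, c are illustrative values.
a, b, c = 100.0, 1.0, 10.0  # demand intercept, demand slope, manufacturer unit cost

def retailer_best_price(w):
    """Follower's best response: maximise (p - w) * (a - b*p), giving p = (a + b*w) / (2*b)."""
    return (a + b * w) / (2 * b)

def manufacturer_profit(w):
    """Leader's profit when it anticipates the retailer's reaction to wholesale price w."""
    p = retailer_best_price(w)
    demand = a - b * p
    return (w - c) * demand

# The leader moves first: scan candidate wholesale prices and keep the most profitable.
best_w = max((i / 10 for i in range(0, 1001)), key=manufacturer_profit)
```

Solving the follower's problem first and feeding it back into the leader's decision, backward induction, is what distinguishes a Stackelberg game from a simultaneous-move one; here the scan recovers the analytic optimum w* = (a + b·c)/(2b) = 55.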

In contrast to other studies that have isolated individual parts of the supply chain, this latest study adopts a system-wide perspective. In it, retailers are not merely intermediaries but active participants shaping demand. They influence consumer behaviour through pricing strategies and promotional efforts, such as emphasising low-carbon products or highlighting environmental credentials. This affects consumers’ willingness to pay for “greener” goods, which in turn feeds back into the manufacturer’s incentives to reduce emissions and pollution earlier in production.

The challenge in green manufacturing is demand uncertainty. Firms need to predict how consumers will respond to greener, low-carbon products, and this uncertainty complicates investment decisions. The research indicates that supply chain participants can still achieve what economists term Pareto improvements, where at least one party benefits without leaving others worse off, through coordinated adjustments in pricing, subsidies and emission reduction efforts.

The results reveal a set of trade-offs. Subsidies aimed at boosting retail promotion tend to increase marketing efforts and allow retailers to charge higher prices, reflecting stronger consumer demand for environmentally friendly products. However, these same measures weaken the producers’ incentives to invest in their own emission reductions and may lead to higher wholesale prices. Yet the overall effect is emission reduction across the supply chain, suggesting that policies or strategies that appear inefficient at the manufacturer level may still deliver environmental benefits.

Shen, Q. and Hou, X. (2026) ‘Carbon reduction coordination and pricing strategy of a four-level supply chain under demand uncertainty’, Int. J. Environment and Pollution, Vol. 76, No. 5, pp.36–57.

30 March 2026

The Internet of Things can only get better

The rapid expansion of the Internet of Things (IoT) has changed how digital systems interact with the physical world. Millions, if not billions, of connected devices, from household appliances to industrial machinery, environmental sensors, medical diagnostic tools, and more, collect and exchange data with minimal human intervention.

This growing “network” has led to the automation of many mundane tasks as well as enormous improvements in efficiency across all these areas and beyond. However, researchers writing in the International Journal of Critical Infrastructures warn that the increasing complexity of the digital world brings with it vulnerabilities. This is perhaps of growing interest and concern as artificial intelligence is incorporated into the way in which IoT devices work.

The team explains that many IoT devices have limited computing resources, and so they are constrained in terms of how well they can address security issues. As a result, many devices are security targets and can, for instance, be added to so-called botnets, networks of infected machines used to carry out larger attacks on networks and infrastructure through Distributed Denial of Service (DDoS) attacks and other methods.

Addressing these problems is vital if critical IoT systems are to be protected in energy grids, medical environments, factories, and across so-called smart cities. The research focuses on anomaly detection as a powerful strategy for identifying potential threats and system failures. Unlike standard rule-based security systems that match predefined patterns of known threats, anomaly detection can use machine learning to learn what normal behaviour looks like from training data and flag deviations from it, rather than relying on explicitly programmed signatures.
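As a toy illustration of learning “normal” from data instead of hard-coding threat signatures, the sketch below fits a simple statistical baseline to sensor readings and flags large deviations. Real IoT deployments would use far richer machine-learning models; the class name, readings, and threshold here are invented for illustration.

```python
from statistics import mean, stdev

class AnomalyDetector:
    """Toy anomaly detector: learns the normal range of a sensor reading
    from training data, then flags readings that deviate too far from it.
    A z-score threshold stands in for the ML models the article discusses."""

    def __init__(self, z_threshold=3.0):
        self.z = z_threshold

    def fit(self, normal_readings):
        # "Training" here is just estimating the mean and spread of normal data.
        self.mu = mean(normal_readings)
        self.sigma = stdev(normal_readings)
        return self

    def is_anomaly(self, reading):
        # Flag anything more than z_threshold standard deviations from normal.
        return abs(reading - self.mu) > self.z * self.sigma

# Hypothetical temperature readings from a healthy sensor, around 20 degrees.
det = AnomalyDetector().fit([20.1, 19.8, 20.3, 20.0, 19.9, 20.2])
```

Even this crude baseline captures the key property the article highlights: it needs no catalogue of known attacks, only examples of normal operation, so it can flag failures and intrusions it has never seen before.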

As IoT technology spreads, anomaly detection in real time is an essential part of implementation and a requirement for maintaining system integrity. Failures or breaches in interconnected systems could have cascading effects, disrupting essential services and undermining public trust.

Ultimately, securing IoT networks through this kind of proactive monitoring is not just a technical necessity but a safeguard for infrastructure that depends on all those millions of devices.

Xu, J. (2026) ‘Integrating IoT and machine learning for scalable anomaly detection in smart city infrastructure’, Int. J. Critical Infrastructures, Vol. 22, No. 10, pp.1–16.

27 March 2026

Location, emplacement, posizione

A new way for computers to recognise and translate complex place names is reported in the International Journal of Information and Communication Technology. The approach offers a roadmap to address a long-standing weakness in digital language systems used for mapping, navigation, and international communication.

Place names often carry historical, geographical, and cultural significance, and errors in translation can lead to confusion or loss of context. More accurate handling of such names could improve digital maps, navigation systems, logistics platforms, and multilingual communication tools.

The research focuses on English-derived place names, those created by adding prefixes, suffixes, or descriptive elements to existing names. While common in geographic data, these constructions are hard for automated systems to work with because they combine meaning and pronunciation in ways that do not transfer neatly across languages.

To address this, the researchers developed a computational model that integrates two complementary approaches: a knowledge graph and a phonetic generation algorithm. A knowledge graph is a structured representation of information that maps relationships between concepts, allowing the system to understand how place names are formed and how their components relate to one another. This captures the semantic dimension of language, its meaning and contextual associations.

The phonetic generation algorithm focuses on the sound of the spoken names. It converts written words into standardised representations of pronunciation, enabling the system to align how a place name is written with how it is spoken. This is particularly important in translation, where names often need to preserve recognisable sounds alongside meaning.
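The paper's phonetic generation algorithm is not detailed here, but the general idea of converting written names into standardised pronunciation codes can be illustrated with the classic Soundex scheme, a much simpler stand-in that maps similar-sounding consonants to shared digits.

```python
def soundex(name):
    """Classic Soundex code: first letter plus three digits grouping
    similar-sounding consonants; vowels reset the grouping, h/w do not.
    A simple stand-in for the paper's (unspecified) phonetic step."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    first = name[0].upper()
    out, prev = [], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out.append(code)
        if ch not in "hw":  # h and w are transparent: they keep the previous code
            prev = code
    return (first + "".join(out) + "000")[:4]
```

Names that are spelled differently but sound alike collapse to the same code, which is exactly the property a translation system needs when it must preserve recognisable sounds across languages.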

These two elements interact using what the team refers to as a bidirectional dynamic interaction fusion mechanism. In this system, the semantic and phonetic information feed into each other to improve recognition and translation. The system also uses a Long Short-Term Memory (LSTM) network, a type of neural network commonly used for language processing.

The model demonstrated an error rate of just 1.3 per cent in recognising place names and 0.8 per cent in translating them. Its outputs are more than 95 per cent fluent and consistent.

Ma, D. (2026) ‘English-derived place name recognition and translation based on knowledge graph and phonetic generation algorithm’, Int. J. Information and Communication Technology, Vol. 27, No. 27, pp.109–132.