16 September 2024

Minting wafer thin defect detection

Research published in the International Journal of Information and Communication Technology may soon help solve a long-standing challenge in semiconductor manufacture: the accurate detection of surface defects on silicon wafers. Crystalline silicon is the critical material used in the production of integrated circuits, and to provide the computing power for everyday electronics and advanced automotive systems, its surface needs to be as pristine as possible before the microscopic features of a circuit are printed onto it.

Of course, no manufacturing technology is perfect, and the intricate process of fabricating semiconductor chips inevitably leads to some defects on the silicon wafers. This reduces the number of working chips in a batch and means a small but significant proportion of the production line output fails.

Defects on silicon wafers have traditionally been spotted manually, with human operators examining each wafer by eye. This is both time-consuming and error-prone, given the fine attention to detail required. As wafer production has ramped up globally to meet demand and the defects themselves have become harder to detect by eye, the limitations of this approach have become more apparent.

Chen Tang, Lijie Yin and Yongchao Xie of the Hunan Railway Professional Technology College in Zhuzhou, Hunan Province, China explain that automated detection systems have emerged as a possible solution. These too present efficiency and accuracy issues in large-scale production environments. As such, the team has turned to deep learning, particularly convolutional neural networks (CNNs), to improve wafer defect detection.

The researchers explain that CNNs have demonstrated significant potential in image recognition, and they have now shown that this can be used to identify minute irregularities on the surface of a silicon wafer. The “You Only Look Once” (YOLO) series of object detection algorithms is well known for its ability to balance accuracy against detection speed.

The Hunan team has taken the YOLOv7 algorithm a step further to address the specific problems faced in wafer defect detection. The main innovation lies in using SPD-Conv, a specialized convolutional operation, to enhance the algorithm’s ability to extract fine details from images of silicon wafers. Additionally, the researchers incorporated a Convolutional Block Attention Module (CBAM) into the model to sharpen the system’s focus on smaller defects that are often missed in manual inspection or by other algorithms.
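Neither component is exotic: SPD-Conv replaces a strided convolution with a space-to-depth rearrangement followed by a non-strided convolution, while CBAM applies channel attention and then spatial attention to a feature map. The sketch below shows minimal PyTorch versions of both blocks; the channel counts and kernel sizes are illustrative assumptions rather than the configuration reported in the paper.

    # Minimal sketch of SPD-Conv and CBAM blocks in PyTorch (illustrative only;
    # channel/kernel sizes are assumptions, not the authors' exact configuration).
    import torch
    import torch.nn as nn

    class SPDConv(nn.Module):
        """Space-to-depth followed by a non-strided convolution, so fine spatial
        detail is moved into the channel dimension instead of being discarded."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.conv = nn.Conv2d(4 * in_ch, out_ch, kernel_size=3, padding=1)

        def forward(self, x):
            # Rearrange each 2x2 spatial block into 4 channels (halves H and W).
            x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                           x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
            return self.conv(x)

    class CBAM(nn.Module):
        """Convolutional Block Attention Module: channel then spatial attention."""
        def __init__(self, ch, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(nn.Conv2d(ch, ch // reduction, 1),
                                     nn.ReLU(),
                                     nn.Conv2d(ch // reduction, ch, 1))
            self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            # Channel attention from global average- and max-pooled descriptors.
            avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
            mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
            x = x * torch.sigmoid(avg + mx)
            # Spatial attention from channel-wise average and max maps.
            s = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial(s))

    # Example: plug the blocks into a downsampling stage of a detector backbone.
    features = torch.randn(1, 64, 80, 80)          # dummy wafer-image feature map
    out = CBAM(128)(SPDConv(64, 128)(features))    # -> shape (1, 128, 40, 40)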

When tested on the standard dataset (WM-811k) for assessing wafer defect detection systems, the team’s refined YOLOv7 algorithm achieved a mean average precision of 92.5% and had a recall rate of 94.1%. It did this quickly, at a rate of 136 images per second, which is faster than earlier systems.

Tang, C., Yin, L. and Xie, Y. (2024) ‘Wafer surface defect detection with enhanced YOLOv7’, Int. J. Information and Communication Technology, Vol. 25, No. 6, pp.1–17.

13 September 2024

Research pick: Burying the carbon - "In-depth analysis of coal chemical structural properties response to flue gas saturation: perspective on long-term CO2 sequestration"

Odd as it may seem, coal seams that cannot be mined might provide an underground storage medium for carbon dioxide produced by industries burning coal above ground. Research in the International Journal of Oil, Gas and Coal Technology describes controlled experiments designed to simulate the deep geological environments where carbon dioxide might be trapped as a way to reduce the global carbon footprint and ameliorate some of the impact of burning fossil fuels. Coal seams represent a potential repository for the long-term storage of carbon dioxide sequestered from flue gases, as they can trap a lot of the gas in a small volume.

Major Mabuza of the University of Johannesburg, Johannesburg, Kasturie Premlall of Tshwane University of Technology, Pretoria, and Mandlenkosi G.R. Mahlobo of the University of South Africa, Florida, South Africa, subjected coal samples to a synthetic flue gas for 90 days at high pressure (9.0 megapascals) and a moderately elevated temperature of 60 degrees Celsius. These conditions were intended to replicate the pressures and temperatures found deep underground, providing a realistic model for how coal might behave when used for carbon dioxide sequestration.

The team then looked at how the chemical structure of coal was changed by exposure to flue gas under these conditions using various advanced analytical chemistry techniques – carbon-13 solid-state nuclear magnetic resonance spectroscopy, universal attenuated total reflectance-Fourier transform infrared spectroscopy, field emission gun scanning electron microscopy with energy dispersive X-ray spectroscopy, and wide-angle X-ray diffraction.

The results showed that exposure to synthetic flue gas led to major changes to the chemical makeup of the coal. For instance, key functional groups, such as aliphatic hydroxyl groups, aromatic carbon-hydrogen bonds, and carbon-oxygen bonds, were all weakened by the process and the overall physical properties of the coal were also changed.

By clarifying how coal interacts with flue gas under simulated, but realistic, conditions, the team fills important gaps in our knowledge about the long-term stability and effectiveness of carbon dioxide storage below ground and specifically in coal seams.

Mabuza, M., Premlall, K. and Mahlobo, M.G.R. (2024) ‘In-depth analysis of coal chemical structural properties response to flue gas saturation: perspective on long-term CO2 sequestration’, Int. J. Oil, Gas and Coal Technology, Vol. 36, No. 5, pp.1–17.

Free Open Access article available: "In-depth analysis of coal chemical structural properties response to flue gas saturation: perspective on long-term CO2 sequestration"

The following paper, "In-depth analysis of coal chemical structural properties response to flue gas saturation: perspective on long-term CO2 sequestration" (International Journal of Oil, Gas and Coal Technology 36(5) 2024), is freely available for download as an open access article.

It can be downloaded via the full-text link available here.

12 September 2024

Research pick: Cancelling the curse - "An improved continuous and discrete Harris Hawks optimiser applied to feature selection for image steganalysis"

Research in the International Journal of Computational Science and Engineering describes a new approach to spotting messages hidden in digital images. The work contributes to the field of steganalysis, which plays a key role in cybersecurity and digital forensics.

Steganography involves embedding data within an everyday medium, such as words hidden among the bits and bytes of a digital image. The image looks no different when displayed on a screen, but someone who knows there is a hidden message can extract or display it. Given the vast number of digital images that now exist, a number that grows at a remarkable rate every day, it is difficult to see how such hidden information might be found by a third party, such as law enforcement. Indeed, in a sense it is security by obscurity, but it is a powerful technique nevertheless. There are legitimate uses of steganography, of course, but there are perhaps more nefarious uses, and so effective detection is important for law enforcement and security.

Ankita Gupta, Rita Chhikara, and Prabha Sharma of The NorthCap University in Gurugram, India, have introduced a new approach that improves detection accuracy while addressing the computational challenges associated with processing the requisite large amounts of data.

Steganalysis involves identifying whether an image contains hidden data. Usually, the spatial rich model (SRM) is employed to detect such hidden messages. It analyses the image to identify tiny changes in its statistical fingerprint that would be present due to the addition of hidden data. However, SRM is complex, generates a very large number of features, and can overwhelm detection algorithms, leading to reduced effectiveness. This issue is often referred to as the “curse of dimensionality.”

The team has turned to a hybrid optimisation algorithm called DEHHPSO, which combines three algorithms: the Harris Hawks Optimiser (HHO), Particle Swarm Optimisation (PSO), and Differential Evolution (DE). Each of these algorithms was inspired by natural processes. For example, the HHO algorithm simulates the hunting behaviour of Harris hawks and balances exploration of the environment with targeting optimal solutions. The team explains that by combining HHO, PSO, and DE, they can work through complex feature sets much more quickly than is possible with any single current algorithm, however sophisticated.

The hybrid approach reduces computational demand by eliminating more than 94% of the features that would otherwise have to be processed. The stripped-back information can then be processed with a support vector machine (SVM) classifier. The team says this method works better than other meta-heuristic approaches (essentially guided trial-and-error methods) and even better than several deep learning methods, which are usually used to solve problems more complex than steganalysis.
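As a rough illustration of the wrapper idea, the sketch below scores candidate feature subsets with a cross-validated SVM. A plain random search stands in for the DEHHPSO optimiser, and the synthetic dataset is a placeholder for real SRM feature vectors, so this shows the fitness evaluation rather than the authors' hybrid algorithm.

    # Wrapper-style feature selection with an SVM fitness function (sketch).
    # A random-search loop stands in for the DEHHPSO optimiser described in the
    # paper; the dataset and parameters below are placeholders, not the authors'.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=200, n_informative=15,
                               random_state=0)  # stand-in for SRM feature vectors

    def fitness(mask):
        """Cross-validated SVM accuracy on the selected feature subset,
        lightly penalised by the fraction of features kept."""
        if mask.sum() == 0:
            return 0.0
        acc = cross_val_score(SVC(kernel='rbf'), X[:, mask], y, cv=3).mean()
        return acc - 0.01 * mask.mean()

    best_mask, best_fit = None, -np.inf
    for _ in range(50):                       # a real optimiser would search smarter
        mask = rng.random(X.shape[1]) < 0.05  # candidate keeping ~5% of features
        f = fitness(mask)
        if f > best_fit:
            best_mask, best_fit = mask, f

    print(f"kept {best_mask.sum()} of {X.shape[1]} features, fitness {best_fit:.3f}")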

Gupta, A., Chhikara, R. and Sharma, P. (2024) ‘An improved continuous and discrete Harris Hawks optimiser applied to feature selection for image steganalysis’, Int. J. Computational Science and Engineering, Vol. 27, No. 5, pp.515–535.

Prof. Rongbo Zhu appointed as new Editor in Chief of International Journal of Radio Frequency Identification Technology and Applications

Prof. Rongbo Zhu from Huazhong Agricultural University in China has been appointed to take over editorship of the International Journal of Radio Frequency Identification Technology and Applications.

11 September 2024

Research pick: Brighter days for business with clouds - "Analysing the cloud efficacy by fuzzy logic"

Cloud computing has become an important part of information technology ventures. It offers a flexible and cost-effective alternative to conventional desktop and local computer infrastructures for storage, processing, and other activities. The biggest advantage for startup companies is that while conventional systems require significant upfront investment in hardware and software, cloud computing gives them computing power and capacity on a “pay-as-you-go” basis. This model not only reduces initial capital expenditure at a time when a company may need to invest elsewhere but also allows businesses to scale their resources based on demand without extensive, repeated, and costly physical upgrades.

A study in the International Journal of Business Information Systems has highlighted the role of fuzzy logic in evaluating the cost benefits of migrating to cloud computing. Fuzzy logic, a method for dealing with uncertainty and imprecision, offers a more flexible approach compared to traditional binary logic. Fuzzy logic recognises the shades of grey inherent in most business decisions rather than seeing things in black and white.

The team, Aveek Basu and Sraboni Dutta of the Birla Institute of Technology in Jharkhand, and Sanchita Ghosh of the Salt Lake City Electronics Complex, Kolkata, India, explains that conventional cost-benefit analyses often fall short when assessing cloud migration due to the inherent unpredictability in factors such as data duplication, workload fluctuations, and capital expenditures. Fuzzy logic, on the other hand, addresses these challenges by allowing decisions to be made that take into account the uncertainties of the real world.

The team applied fuzzy logic to evaluate three factors associated with the adoption of cloud computing platforms: first, the probability of data duplication; second, capital expenditure; and third, workload variation. By incorporating these factors into the analysis, the team obtained a comprehensive view of the potential benefits and drawbacks of cloud computing from the perspective of a startup company. The approach offers a more adaptable assessment than traditional models.
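To see how a fuzzy evaluation differs from a crisp yes/no calculation, consider the toy example below. The triangular membership functions and the two rules are invented for illustration and are not taken from the paper, but they show how partially true conditions can be blended into a single benefit score.

    # Toy fuzzy evaluation of cloud-migration benefit (illustrative only; the
    # membership functions and rules are invented for the sketch, not taken
    # from the paper).
    def tri(x, a, b, c):
        """Triangular membership function peaking at b, zero outside [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def evaluate(dup_prob, capex, workload_var):
        """Inputs normalised to 0..1; output is a 0..1 'benefit' score."""
        low_dup   = tri(dup_prob, -0.5, 0.0, 0.5)
        high_dup  = tri(dup_prob,  0.5, 1.0, 1.5)
        low_capex = tri(capex,    -0.5, 0.0, 0.5)
        high_var  = tri(workload_var, 0.5, 1.0, 1.5)

        # Two example rules, combined by a weighted average of firing strengths:
        # R1: low duplication AND low capex      -> high benefit (0.9)
        # R2: high duplication AND high variance -> low benefit  (0.2)
        r1 = min(low_dup, low_capex)
        r2 = min(high_dup, high_var)
        if r1 + r2 == 0:
            return 0.5  # no rule fires: neutral score
        return (0.9 * r1 + 0.2 * r2) / (r1 + r2)

    print(evaluate(dup_prob=0.2, capex=0.3, workload_var=0.7))  # 0.9: favourable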

One of the key findings is that cloud computing leads to a huge reduction in the complexity and costs associated with managing business software and the requisite hardware as well as the endless upgrades and IT support often needed. Cloud service providers manage all of that on behalf of their clients, allowing the business to focus instead on its primary operations rather than IT.

Basu, A., Ghosh, S. and Dutta, S. (2024) ‘Analysing the cloud efficacy by fuzzy logic’, Int. J. Business Information Systems, Vol. 46, No. 4, pp.460–490.

10 September 2024

Research pick: A rare take on green metal volatility - "Price and volatility of rare earths"

Research in the International Journal of Global Energy Issues has looked at the volatility of rare earth metals traded on the London Stock Exchange. The work used an advanced statistical model known as gjrGARCH(1,1) to follow and predict market turbulence. This model was found to be the best fit for predicting rare earth price volatility and offers important insights into the stability of these crucial resources.
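For readers who want to experiment, a GJR-GARCH(1,1) model of this kind can be fitted with the open-source arch package for Python. The sketch below uses a simulated return series rather than the rare-earth price data studied in the paper.

    # Fitting a GJR-GARCH(1,1) model to a (simulated) daily return series with
    # the `arch` package; the data here are random placeholders, not the
    # rare-earth prices analysed in the study.
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(42)
    returns = rng.standard_t(df=5, size=1000)   # toy daily percentage returns

    # p=1, o=1, q=1 gives the asymmetric (GJR) GARCH specification, in which
    # negative shocks are allowed to raise volatility more than positive ones.
    model = arch_model(returns, vol='GARCH', p=1, o=1, q=1, dist='t')
    result = model.fit(disp='off')
    print(result.summary())

    forecast = result.forecast(horizon=5)       # 5-step-ahead variance forecast
    print(forecast.variance.iloc[-1])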

Auguste Mpacko Priso of Paris-Saclay University, France, and the Open Knowledge Higher Institute (OKHI), Cameroon, working with an OKHI colleague, explains that the rare earths are a group of 17 metals* with unique and useful chemical properties. They are essential to high-tech products and industry, particularly electric vehicle batteries and renewable energy infrastructure. They are also used in other electronic components, lasers, glass, magnetic materials, and as components of catalysts for a range of industrial processes. As the global transition to reduced-carbon and even zero-carbon technologies moves forward, there is an urgent need to understand the pricing of rare earth metals, as they are such an important part of the technology needed for that environmentally friendly future.

The team compared the volatility of rare earth prices with that of other metals and stocks. Volatility, or the degree of price fluctuation, was found to be persistent in rare earths, meaning that prices tend to fluctuate continually over time rather than settling quickly at a stable point. For investors and manufacturers dependent on these metals, such constant volatility poses a substantial economic risk. As such, forecasting price changes might help mitigate that risk, leading to greater stability and allowing investors to work in this area secure in the returns they hope to see.

Other models used in stock price prediction failed to model the volatility of the rare earth metals well, suggesting that this market has distinctive characteristics that affect prices differently from other, more familiar commodities. Given that the demand for and use of rare earth metals is set to surge, there is a need to understand their price volatility and to take this into account in green investments and development. It is worth noting that there is a major political component to this volatility, given that China and other nations with vast reserves of rare earth metal ores do not necessarily share the political views or purposes of the nations demanding these resources.

Mpacko Priso, A. and Doumbia, S. (2024) ‘Price and volatility of rare earths’, Int. J. Global Energy Issues, Vol. 46, No. 5, pp.436–453.

*Rare earth metals: cerium, dysprosium, erbium, europium, gadolinium, holmium, lanthanum (sometimes considered a transition metal), lutetium, neodymium, praseodymium, promethium, samarium, scandium, terbium, thulium, ytterbium, yttrium

9 September 2024

Research pick: Shipping included: boosting port efficiency - "Transhipment: when movement matters in port efficiency"

Container ports are important hubs in the global trade network. They have seen enormous growth in their roles over recent years, and operational demands are always changing, especially as more sophisticated logistics systems emerge. A study in the International Journal of Shipping and Transport Logistics sheds new light on how the changes in this sector are affecting port efficiency, with a focus on the different types of container activity.

Fernando González-Laxe of the University Institute of Maritime Studies, A Coruña University, and Xose Luis Fernández and Pablo Coto-Millán of the Universidad de Cantabria, Santander, Spain, explain that container ports handle cargo packed in standardized shipping containers, the big metal boxes with which many people are familiar, commonly transported en masse on vast sea-going vessels, unloaded port-side, and loaded onto trains and road transporters for their onward journey. The increasing size of the ships used for transporting these containers, some of which can carry up to 25,000 TEUs (twenty-foot equivalent units, i.e. the containers), puts growing pressure on ports to expand their capacity. As such, there is a lot of ongoing effort to automate processes and optimize port operations to allow the big container ports to remain viable and competitive.

The team used Data Envelopment Analysis (DEA) to evaluate the efficiency of container ports by comparing the inputs and outputs of their operations. They focused on ten major Spanish container ports – among them the major ports of Algeciras, Barcelona, and Valencia – in order to understand how various types of container activity – import/export, transshipment, and cabotage (coastal shipping) – influence port performance.
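DEA itself reduces to a small linear program solved once per port. The sketch below sets up the classic input-oriented CCR formulation with made-up inputs and outputs for four hypothetical ports; it illustrates the method rather than reproducing the Spanish port data or the exact model specification used in the paper.

    # Input-oriented CCR DEA model solved port by port with scipy's linear
    # programming routine. The input/output figures are invented illustrations,
    # not the Spanish port data used in the study.
    import numpy as np
    from scipy.optimize import linprog

    # rows = ports; inputs (berth length in km, cranes) and one output (kTEU)
    X = np.array([[3.2, 12], [2.1, 8], [4.0, 15], [1.5, 6]], dtype=float)  # inputs
    Y = np.array([[1800.0], [900.0], [2500.0], [400.0]])                   # outputs

    def ccr_efficiency(j):
        """Efficiency of port j: minimise theta such that a non-negative
        combination of all ports uses at most theta * inputs_j and produces
        at least outputs_j."""
        n, m = X.shape
        s = Y.shape[1]
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0
        # sum_k lambda_k * X[k,i] - theta * X[j,i] <= 0   (for each input i)
        A_in = np.hstack([-X[j].reshape(m, 1), X.T])
        # -sum_k lambda_k * Y[k,r] <= -Y[j,r]             (for each output r)
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(m), -Y[j]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method='highs')
        return res.fun

    for j in range(X.shape[0]):
        print(f"port {j}: efficiency = {ccr_efficiency(j):.3f}")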

One of the key findings from the study is the relationship between port efficiency and the types of container activities handled. The team found that there is an inverted U-shape relationship: ports that balanced transshipment (transferring containers between ships at intermediate points) with import/export activities tended to perform better than those that specialized in only one type of activity. This suggests that a diversified approach to container activities may enhance port efficiency.

The work suggests that by adopting a balanced approach to their activities, container ports could boost efficiency and reinforce their role in the global supply chain.

González-Laxe, F., Fernández, X.L. and Coto-Millán, P. (2024) ‘Transhipment: when movement matters in port efficiency’, Int. J. Shipping and Transport Logistics, Vol. 18, No. 4, pp.383–402.

Free Open Access article available: "Transhipment: when movement matters in port efficiency"

The following paper, "Transhipment: when movement matters in port efficiency" (International Journal of Shipping and Transport Logistics 18(4) 2024), is freely available for download as an open access article.

It can be downloaded via the full-text link available here.

6 September 2024

Research pick: AI learns elephant talk - "Elephant sound classification using machine learning algorithms for mitigation strategy"

Dr Dolittle, eat your heart out! Researchers writing in the International Journal of Engineering Systems Modelling and Simulation demonstrate how a trained algorithm can identify the trumpeting calls of elephants, distinguishing them from human and other animal sounds in the environment. The work could improve safety for villagers and help farmers protect their crops and homesteads from wild elephants in India.

T. Thomas Leonid of the KCG College of Technology and R. Jayaparvathy of the SSN College of Engineering in Chennai, India, explain how conflicts between people and elephants are becoming increasingly common, especially in areas where human activity has encroached on natural elephant habitats. This is particularly true where agriculture meets forested land. These conflicts are not just an environmental concern; they pose a threat to human life and livelihoods.

In India, wild elephants are responsible for more human fatalities than large predators. Their presence also leads to the destruction of crops and infrastructure, which creates a heavy financial burden on rural communities. Of course, the elephants are not to blame; they are wild animals doing their best to survive. The root causes lie in habitat destruction due to human activities such as mining, dam construction, and increasing encroachment into forests for resources like firewood and water.

As such, finding effective solutions to mitigate human-elephant encounters is becoming increasingly urgent. The team suggests that a way to reduce the number of tragic and costly outcomes would be to put in place an early-warning system. Such a system would recognise elephant behaviour from their vocalisations and allow farmers and others to avoid the elephants or perhaps even safely divert an incoming herd before it becomes a serious and damaging hazard.

The researchers compared several machine learning models to determine which one best detects and classifies elephant sounds. The models tested included Support Vector Machines (SVM), K-nearest Neighbours (KNN), Naive Bayes, and Convolutional Neural Networks (CNN). They trained each of these algorithms on a dataset of 450 animal sound samples from five different species. One of the key steps in the process is feature extraction, which involves identifying distinctive characteristics within the audio signals, such as frequency, amplitude, and the temporal structure of the sounds. These features are then used to train the machine learning models to recognise elephant calls.
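A typical pipeline of this kind extracts a time-frequency representation of each clip and feeds it to a small CNN. The sketch below uses MFCCs computed with librosa as a stand-in for whichever features the authors extracted; the file name, feature sizes, and network layout are placeholder assumptions.

    # Sketch of the audio pipeline: extract a time-frequency feature matrix from
    # a clip with librosa and classify it with a small CNN in PyTorch. MFCCs are
    # a common choice for this kind of task and stand in here for the features
    # the authors extracted; the file path and class count are placeholders.
    import librosa
    import torch
    import torch.nn as nn

    y, sr = librosa.load("clip.wav", sr=16000, mono=True)    # placeholder file
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)        # (40, frames)
    x = torch.tensor(mfcc, dtype=torch.float32)[None, None]   # (1, 1, 40, T)

    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),               # collapse to one vector per clip
        nn.Flatten(),
        nn.Linear(32, 5),                      # 5 classes, elephant among them
    )
    logits = model(x)                          # untrained weights: shapes only
    print("predicted class:", logits.argmax(dim=1).item())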

The most accurate was the convolutional neural network, a deep learning model that automatically learns complex features from raw data. CNNs are particularly well suited to this type of task because of their ability to recognise intricate patterns in sound data. The CNN achieved an accuracy of 84 percent, far better than the other models. This might be improved further, but it is sufficiently accurate to show the potential for a reliable, automated system to detect elephants on the move that might be heading towards homes and farms.

Leonid, T.T. and Jayaparvathy, R. (2024) ‘Elephant sound classification using machine learning algorithms for mitigation strategy’, Int. J. Engineering Systems Modelling and Simulation, Vol. 15, No. 5, pp.248–252.

5 September 2024

Research pick: A leap towards an emotion detector - "Dynamic emotion recognition of human face based on convolutional neural network"

Research in the International Journal of Biometrics introduces a method to improve the accuracy and speed of dynamic emotion recognition using a convolutional neural network (CNN) to analyse faces. The work, undertaken by Lanbo Xu of Northeastern University in Shenyang, China, could have applications in mental health, human-computer interaction, security, and other areas.

Facial expressions are a major part of non-verbal communication, providing clues about an individual’s emotional state. Until now, emotion recognition systems have used static images, which means they cannot capture the changing nature of emotions as they play out over a person’s face during a conversation, interview or other interaction. Xu’s work addresses this by focusing on video sequences. The system can track changing facial expressions over a series of video frames and then offer a detailed analysis of how a person’s emotions unfold in real time.

However, prior to analysis, the system applies an algorithm, the “chaotic frog leap algorithm”, to sharpen key facial features. The algorithm mimics the foraging behaviour of frogs to find optimal parameters in the digital images. The CNN, trained on a dataset of human expressions, is the most important part of the approach, allowing Xu to process visual data by recognizing patterns in new images that match those seen in the training data. By analysing several frames from video footage, the system can capture movements of the mouth, eyes, and eyebrows, which are often subtle but important indicators of emotional changes.
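In outline, the video side of such a system samples frames, scores each frame with the trained CNN, and aggregates the per-frame scores into a clip-level result. The sketch below illustrates that loop with OpenCV and an untrained placeholder network; the file name, image size, and seven-class output are assumptions, and any frog-leap-optimised preprocessing would sit where the resize and normalisation step is.

    # Sketch of per-frame emotion scoring aggregated over a video clip. Frame
    # sampling uses OpenCV; `emotion_model` is a stand-in for a CNN trained on
    # facial-expression data. Names and sizes are assumptions, not the paper's.
    import cv2
    import numpy as np
    import torch
    import torch.nn as nn

    emotion_model = nn.Sequential(               # placeholder, not a trained net
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(8, 7))           # 7 basic emotion classes

    cap = cv2.VideoCapture("interview.mp4")      # placeholder video file
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        face = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
        x = torch.from_numpy(face).permute(2, 0, 1)[None]   # (1, 3, 224, 224)
        with torch.no_grad():
            scores.append(torch.softmax(emotion_model(x), dim=1))
    cap.release()

    clip_score = torch.cat(scores).mean(dim=0)   # average emotion profile
    print("dominant emotion index:", int(clip_score.argmax()))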

Xu reports an accuracy of up to 99 percent, with the system providing an output in a fraction of a second. Such precision and speed are ideal for real-time use in areas where detecting emotion might be useful without the need for subjective assessment by another person or team. Its potential applications lie in improving user experiences with computer interactions, where the computer can respond appropriately to the user’s emotional state, such as frustration, anger, or boredom.

The system might be useful in screening people for emotional disorders without initial human intervention. It could also be used to enhance security systems, granting access to resources only to those in a particular emotional state and perhaps barring entry to an angry or upset person. The same system could even be used to identify driver fatigue on transport systems or in one’s own vehicle. The entertainment and marketing sectors might also see applications where understanding emotional responses could improve content development, delivery, and consumer engagement.

Xu, L. (2024) ‘Dynamic emotion recognition of human face based on convolutional neural network’, Int. J. Biometrics, Vol. 16, No. 5, pp.533–551.

4 September 2024

Free Open Access article available: "The role of pre-formation intangible assets in the endowment of science-based university spin-offs"

The following paper, "The role of pre-formation intangible assets in the endowment of science-based university spin-offs" (International Journal of Technology Management 96(4) 2024), is freely available for download as an open access article.

It can be downloaded via the full-text link available here.

Research pick: Cyber shields up! - "Research on network intrusion detection model that integrates WGAN-GP algorithm and stacking learning module"

As computer network security threats continue to grow in complexity, the need for more advanced security systems is obvious. Indeed, traditional methods of intrusion detection have struggled to keep pace with the changes, and so researchers are exploring alternatives. A study in the International Journal of Computational Systems Engineering suggests that the integration of data augmentation and ensemble learning methods could improve the accuracy of intrusion detection systems.

Xiaoli Zhou of the School of Information Engineering at Sichuan Top IT Vocational Institute in Chengdu, China, has focused on a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP). This is an advanced version of the standard machine learning model that can create realistic data through a process of competition between two neural networks. Conventional GANs often suffer from unstable training and mode collapse, where the model fails to generate diverse data. The WGAN-GP variant mitigates these issues by incorporating a gradient penalty, which, according to the research, helps to stabilize the training process and improve the quality of the generated data. The generated data can then be used effectively to simulate network traffic for intrusion detection with a view to blocking hacking attempts.
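The gradient penalty at the heart of WGAN-GP is short enough to show in full: the critic's gradient norm is pushed towards one on random interpolations between real and generated samples. The sketch below gives the standard PyTorch formulation on dummy feature vectors rather than the network-traffic data used in the study.

    # The WGAN-GP gradient penalty in PyTorch, sketched on dummy tensors rather
    # than the network-traffic features used in the paper.
    import torch

    def gradient_penalty(critic, real, fake, lambda_gp=10.0):
        alpha = torch.rand(real.size(0), 1, device=real.device)
        interpolated = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
        scores = critic(interpolated)
        grads = torch.autograd.grad(outputs=scores, inputs=interpolated,
                                    grad_outputs=torch.ones_like(scores),
                                    create_graph=True)[0]
        # Penalise deviations of the gradient norm from 1 (the Lipschitz target).
        return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

    # Toy usage with a linear critic on 20-dimensional feature vectors.
    critic = torch.nn.Linear(20, 1)
    real, fake = torch.randn(64, 20), torch.randn(64, 20)
    gp = gradient_penalty(critic, real, fake)
    loss = critic(fake).mean() - critic(real).mean() + gp   # critic loss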

There is the potential to enhance the WGAN-GP data quality still further by combining it with a stacking learning module. Stacking is an ensemble learning technique that involves training multiple models and then combining their outputs using a meta-classifier. In Zhou’s work, the stacking module integrates the predictions from several WGAN-GP-based models so that network traffic can be classified as normal or intrusive.
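Stacking itself is readily available in standard libraries. The sketch below uses scikit-learn's StackingClassifier with generic base learners and a synthetic dataset; the choice of base models and meta-classifier here is illustrative and not the combination reported in the paper.

    # Stacking ensemble with scikit-learn: base learners' predictions are
    # combined by a meta-classifier. The base models are generic illustrations;
    # in the paper the stacked models work on WGAN-GP-augmented traffic data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    stack = StackingClassifier(
        estimators=[('rf', RandomForestClassifier(n_estimators=100)),
                    ('svm', SVC(probability=True))],
        final_estimator=LogisticRegression(),   # meta-classifier over base outputs
        cv=5)
    stack.fit(X_tr, y_tr)
    print("accuracy on held-out traffic:", stack.score(X_te, y_te))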

The approach was tested against well-established data augmentation methods, including the Synthetic Minority Over-sampling Technique (SMOTE), Adaptive Synthetic Sampling (ADASYN), and a simple version of WGAN. The results showed that the WGAN-GP-based model had an accuracy rate of almost 90%, better than the scores for the other techniques tested. The model can thus distinguish effectively between legitimate and potentially harmful network activity. Further optimisation might improve the accuracy and allow the system to be used to protect governments, corporations, individuals, and others at risk from network security threats.

Zhou, X. (2024) ‘Research on network intrusion detection model that integrates WGAN-GP algorithm and stacking learning module’, Int. J. Computational Systems Engineering, Vol. 8, No. 6, pp.1–10.

Free Open Access article available: "Researching together in academic engagement in engineering: a study of dual affiliated graduate students in Sweden"

The following paper, "Researching together in academic engagement in engineering: a study of dual affiliated graduate students in Sweden" (International Journal of Technology Management 96(4) 2024), is freely available for download as an open access article.

It can be downloaded via the full-text link available here.

3 September 2024

Free Open Access article available: "Measuring user acceptance of e-government adoption in an Indonesian context: a study of the extended technology acceptance model"

The following paper, "Measuring user acceptance of e-government adoption in an Indonesian context: a study of the extended technology acceptance model" (International Journal of Electronic Governance16(2) 2024), is freely available for download as an open access article.

It can be downloaded via the full-text link available here.

Free sample articles newly available from International Journal of Financial Markets and Derivatives

The following sample articles from the International Journal of Financial Markets and Derivatives are now available here for free:
  • Dynamic correlations of bond and equity futures and macroeconomic determinants: international evidence
  • The Commitment of Traders report as a trading signal? Short-term price reversals and market efficiency in the US-futures market
  • Finite difference solutions of the CEV PDE
  • The relative efficiency of investment grade credit and equity markets
  • The effect of bank diversification on the capital, risk, profitability and efficiency of the eurozone and the US banks in the aftermath of the global financial crisis

Research pick: Science-based spin-offs - "The role of pre-formation intangible assets in the endowment of science-based university spin-offs"

Science-based university spin-offs, especially in the biotech sector, play an important role in transforming cutting-edge academic science into marketable technological products. However, such start-ups face many challenges that can be very different from those encountered by conventional startups. Research in the International Journal of Technology Management has looked at the complexities and potential of such spin-offs and sheds new light on the role played by the academic scientists involved in the process and on how launch timing can make all the difference.

Andrew Park of the University of Victoria, Canada, and colleagues explain that unlike typical start-ups, which might bring a product to market relatively quickly, new biotechnology companies often require long periods of financial investment and lengthy development, testing, and regulatory approval for their products. This is particularly true in drug development, where the path from the laboratory bench to the marketplace can span a decade or more, not least because of the need for extensive clinical trials and the completion of regulatory requirements. As such, there is often a greater need to plan strategically and to use resources effectively even before the spin-off company is officially launched.

Many laboratory scientists make the leap from bench to business, some with much greater success than others. The successful scientist-entrepreneurs bring with them not only their research acumen and intellectual property but also various intangible assets that can make or break a spin-off company. Among those intangibles might be research publications and patents, networks of contacts and collaborators, and access to funding opportunities that might be unavailable to companies with no direct academic links.

The paper’s case studies of three biotechnology spin-offs within the British Columbia innovation ecosystem suggest that the value of intangible assets is usually only realised when strong entrepreneurial capabilities are available to the start-up company. These capabilities are not just about business acumen but also about understanding how to align the technology with market needs, protect intellectual property effectively, and mentor the founding team towards successful biotech commercialization. Critically, the researchers found that the timing of a company launch can correlate strongly with success or failure.

Park, A., Goudarzi, A., Yaghmaie, P., Thomas, V.J. and Maine, E. (2024) ‘The role of pre-formation intangible assets in the endowment of science-based university spin-offs’, Int. J. Technology Management, Vol. 96, No. 4, pp.230–260.

Free Open Access article available: "Research on network intrusion detection model that integrates WGAN-GP algorithm and stacking learning module"

The following paper, "Research on network intrusion detection model that integrates WGAN-GP algorithm and stacking learning module" (International Journal of Computational Systems Engineering 8(6) 2024), is freely available for download as an open access article.

It can be downloaded via the full-text link available here.

2 September 2024

Free sample articles newly available from International Journal of Advanced Media and Communication

The following sample articles from the International Journal of Advanced Media and Communication are now available here for free:
  • Detection of metamorphic malicious mobile code on android-based smartphones
  • Business computing education: a radical approach for efficient streamlining of an effective education process and relevant curriculum
  • Decision support system for course enrolment management using qualitative information
  • Korea's strategies for mobile technology standards in smart ecosystem
  • An UHD video handling system using a scalable server over an IP network

Free Open Access article available: "Proposal for a framework of contextual metadata in selected research infrastructures of the life sciences and the social sciences & humanities"

The following paper, "Proposal for a framework of contextual metadata in selected research infrastructures of the life sciences and the social sciences & humanities" (International Journal of Metadata, Semantics and Ontologies 16(4) 2023), is freely available for download as an open access article.

It can be downloaded via the full-text link available here.

Research pick: Framing research metadata - "Proposal for a framework of contextual metadata in selected research infrastructures of the life sciences and the social sciences & humanities"

A multi-centre research team writing in the International Journal of Metadata, Semantics and Ontologies discusses how they hope to fill a significant gap in the documentation and sharing of research data by focusing on “contextual metadata”. The researchers explain that, traditionally, research metadata has usually described research outputs, such as publications or datasets. The new approach considers detailed information about the research process itself, such as how the data were generated, the techniques used, and the specific conditions under which the research was conducted.

The project considered six research domains across the life sciences, social sciences, and humanities. Semi-structured interviews and a literature review allowed the team to unravel how researchers in each domain manage this kind of contextual metadata. They found that although a considerable amount of such metadata is available, it is often implicit and scattered across various documentation fields. This fragmentation makes it difficult to identify and use the information effectively.

The team thus suggests that there is a need for a standardized framework for contextual metadata that could be used across all disciplines. Such a framework would support future work to look at the replicability and reproducibility of research, which are important in scientific integrity and validation. Replicability refers to the ability to duplicate a study’s results under the same conditions, while reproducibility involves obtaining consistent results using the same datasets and methods.

Additionally, a standardized approach to contextual metadata could reduce research waste and even help reduce research misconduct by providing a clearer and more consistent way to document research processes. However, many challenges remain because of the diverse nature of research practices across different disciplines. Differences in funding models, regulatory requirements, and methods mean that a universal framework might not be directly applicable to all fields. As such, the team has proposed a generic framework that recognizes the need for domain-specific adaptations.

Ohmann, C., Panagiotopoulou, M., Canham, S., Holub, P., Majcen, K., Saunders, G., Fratelli, M., Tang, J., Gribbon, P., Karki, R., Kleemola, M., Moilanen, K., Broeder, D., Daelemans, W. and Fivez, P. (2023) ‘Proposal for a framework of contextual metadata in selected research infrastructures of the life sciences and the social sciences & humanities’, Int. J. Metadata, Semantics and Ontologies, Vol. 16, No. 4, pp.261–277.