Review of Artificial Intelligence-Based Systems: Evaluation, Standards, and Methods

Document Type: Review Article

Authors

1 Center for Innovation and Development of AI (CIDAI), ICT Research Institute

2 Manager, Center for Innovation and Development of AI (CIDAI), ICT Research Institute

DOI: 10.22034/asas.2024.450378.1055

Abstract

The rapid expansion of artificial intelligence (AI) technologies and algorithms across industries worldwide necessitates a focus on protecting the public interest. Major economies have invested heavily in AI initiatives, underscoring the significance of these advancements. To mitigate the potential risks of AI failures, the dependability and quality of these systems must be ensured. In response, efforts are underway to establish monitoring frameworks and evaluation standards for AI products.
This paper analyzes more than 200 standards and publications to identify quantitative and qualitative metrics for evaluating AI systems throughout their development stages, and examines the methodologies, checklists, and standards associated with these assessment criteria. The findings emphasize the importance of robust evaluation frameworks for ensuring the safety and effectiveness of AI systems. By synthesizing these metrics and standards, the study offers insights for policymakers, regulators, and industry professionals seeking to strengthen AI oversight and governance. It also highlights the necessity of continuous monitoring and evaluation throughout the AI development process to address potential risks and challenges. By advocating transparency and accountability in AI practices, stakeholders can build trust and confidence in the deployment of these technologies.
