Security Evaluation Laboratory

The Security Laboratory of RCII, recognized as the first IT security laboratory in Iran, has achieved a significant milestone by becoming the first in the country to obtain the ISO/IEC 17025 certification.

Evaluation Standards and Methodologies:

The following standards, frameworks, and methodologies enable RCII to thoroughly identify security vulnerabilities and weaknesses, ensuring that all evaluated IT products meet rigorous global security standards.

ISO/IEC 15408

The security evaluation tests at RCII are conducted in accordance with the ISO/IEC 15408 standard, also known as the Common Criteria (CC). This standard is divided into five key parts:

  • Part 1: Introduction to Security Concepts

    Outlines the foundational principles of IT security.

  • Part 2: Security Functional Requirements

    Defines the specific security features that IT products must provide.

  • Part 3: Security Assurance Requirements

    Describes the processes and measures to ensure these security features are properly implemented and maintained.

  • Part 4: Evaluation Methods and Activities

    Provides a framework for specifying evaluation methods and activities, offering guidelines for evaluators on how to conduct evaluations.

  • Part 5: Evaluation Assurance Levels (EALs)

    Defines the predefined levels of assurance, which represent the depth of evaluation and testing for IT products. The higher the level, the more comprehensive the evaluation.

RCII employs the Common Evaluation Methodology (CEM) to assess the security of IT products. This methodology ensures that the security requirements, derived from the specific security objectives, identified threats, and vulnerabilities of each product, are rigorously tested.

Other Standards and Methodologies

In addition to ISO/IEC 15408, RCII incorporates several well-recognized industry standards and best practices during its security evaluation process, including:

OWASP standards provide comprehensive, structured sets of security controls designed to assess and verify the security of web applications, mobile applications, and other software systems. These standards help ensure that applications meet essential security requirements and are protected against a wide range of threats.

Web Application:

  • ASVS

    (OWASP Application Security Verification Standard)

  • WSTG

    (OWASP Web Security Testing Guide)

Mobile Application:

  • MASVS

    (OWASP Mobile Application Security Verification Standard)

  • MASTG

    (OWASP Mobile Application Security Testing Guide)

The OWASP Top 10 projects enumerate the most critical security risks to web applications, mobile applications, and APIs.

  • OWASP Top 10:2021

  • OWASP Mobile Top 10:2024

  • OWASP API Security Top 10:2023
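A WSTG-style assessment often begins with simple, automatable checks, such as verifying that recommended HTTP security response headers are present. A minimal sketch in Python; the header list and the `observed` response are illustrative examples, not an RCII tool:

```python
# Commonly recommended response headers (illustrative subset)
RECOMMENDED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(headers: dict) -> list:
    """Report recommended response headers that are absent.

    Comparison is case-insensitive, as HTTP header names are.
    """
    present = {name.lower() for name in headers}
    return [h for h in RECOMMENDED_HEADERS if h.lower() not in present]

# Hypothetical headers captured from a target under test
observed = {
    "Content-Type": "text/html",
    "X-Content-Type-Options": "nosniff",
}
gaps = missing_security_headers(observed)
```

In practice such a check runs against live responses from the system under test; the pure function above keeps the logic separable and testable.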

Quality Evaluation Laboratory

RCII’s quality laboratory, recognized as a pioneer in software quality testing in Iran, has achieved ISO/IEC 17025 certification, a globally respected accreditation for laboratories conducting testing. This certification underscores RCII’s commitment to excellence and adherence to international best practices in quality assurance.

The laboratory’s software quality testing processes are rigorously aligned with advanced global standards and methodologies, establishing a state-of-the-art testing framework. RCII has played a pivotal role in developing and implementing numerous national standards, significantly shaping the landscape of software quality assurance in the region.

Among the key international standards adopted at RCII are the ISO/IEC 25000 series, also known as SQuaRE (Systems and software Quality Requirements and Evaluation), which offers comprehensive criteria for evaluating software product quality. These standards guide the laboratory’s meticulous evaluation processes, supported by advanced testing tools to ensure accuracy, reliability, and full compliance with both contemporary and legacy benchmarks.

By implementing these standards, RCII ensures that the software products it tests meet the highest levels of quality and reliability across the industry.

The following are the key quality tests rigorously conducted at RCII’s quality laboratory:

  • Functional Testing:

    Verifies the system's compliance with specified functional requirements, ensuring each feature operates correctly according to defined use cases and business rules.

  • Performance Testing:

    Analyzes key performance metrics such as response time, throughput, and resource utilization under varying conditions, ensuring the system meets performance benchmarks and performs efficiently under typical and peak loads.

  • Load Testing:

    Evaluates the system’s scalability and performance by simulating maximum anticipated user load, determining its ability to sustain peak concurrent user volumes without degradation in performance.

  • Stress Testing:

    Assesses system resilience by pushing it beyond normal operational capacity, analyzing its behavior under resource constraints, hardware failures, or unexpected spikes in demand to ensure graceful degradation.

  • Volume Testing:

    Tests the system’s ability to handle large datasets by populating it with high volumes of records and transactions, ensuring data integrity and system stability under maximum data loads.

  • Interoperability Testing:

    Verifies that two or more systems, components, or software applications can work together and exchange information or services effectively, ensuring that systems from different vendors, platforms, or technologies interact as expected without data loss, miscommunication, or compatibility errors.

  • Security Testing:

    Rigorously examines the system’s access control mechanisms, ensuring that data confidentiality, integrity, and availability are maintained by verifying appropriate enforcement of user permissions and protection against vulnerabilities.

  • Software Architecture Evaluation:

    Assesses the design and structure of software systems to ensure they meet quality attributes such as performance, security, scalability, and maintainability. The evaluation identifies potential risks and areas for improvement and helps ensure that the architecture aligns with business goals and technical requirements.

  • User Interface (UI) Testing:

    Assesses the system’s user experience, evaluating ease of navigation, consistency, and accessibility, ensuring that the interface is intuitive and adheres to usability standards for optimal user interaction.
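Several of the tests above (performance, load, stress) reduce to measuring latency and throughput under concurrency. A minimal sketch of such a harness; `fake_request` is a stand-in for a real call to the system under test:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def measure_load(request_fn, total_requests=200, concurrency=20):
    """Fire `total_requests` calls at `request_fn` using `concurrency`
    worker threads and report basic performance metrics."""
    def timed_call(_):
        start = time.perf_counter()
        request_fn()
        return time.perf_counter() - start

    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(total_requests)))
    wall = time.perf_counter() - wall_start

    return {
        "requests": total_requests,
        "throughput_rps": total_requests / wall,
        "avg_latency_s": statistics.mean(latencies),
        "p95_latency_s": latencies[int(0.95 * (len(latencies) - 1))],
    }

# Stand-in for the system under test (e.g. an HTTP request in practice)
def fake_request():
    time.sleep(0.002)

metrics = measure_load(fake_request, total_requests=100, concurrency=10)
```

Real load tests additionally ramp concurrency, hold sustained plateaus, and track error rates; the sketch only shows the core measurement loop.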

Certificate Authority Applications (CA) Evaluation Laboratory

The Research Center of Informatic Industries (RCII) has developed the national standards and criteria for Certificate Authority (CA) applications. These standards were developed under the supervision of the Governmental Root Certification Authority (GRCA) of the Islamic Republic of Iran. Drawing on the deep knowledge of our experts, RCII has established a PKI Lab for testing and evaluating CA applications. This laboratory is the first, and currently the only, PKI lab in Iran approved by the e-Commerce Development Center.

The most significant achievements of the Certificate Authority Applications (CA) Evaluation Laboratory:

Providing practical specifications for CA applications
Providing guidelines and procedures for testing CA applications
Providing software and hardware requirements for testing CA applications

Public-Key Enabled Applications (PKE) Evaluation Laboratory

The Research Center of Informatic Industries (RCII) has developed the national standards and criteria for Public-Key Enabled (PKE) applications. These criteria were developed under the supervision of the Governmental Root Certification Authority (GRCA) of Iran. Drawing on the deep knowledge of our experts, RCII has established a PKI Lab for testing and evaluating PKE applications. This laboratory is the first, and currently the only, PKI lab in Iran approved by the e-Commerce Development Center.

The most significant achievements of the Public-Key Enabled Applications (PKE) Evaluation Laboratory:

Providing practical specifications for PKE applications
Providing guidelines and procedures for testing PKE applications
Providing software and hardware requirements for testing PKE applications

Public Key Infrastructure Evaluation Standards:

  • National Criteria

  • FIPS 196

    (Authentication)

  • CAVP

    (Cryptographic Algorithm Validation Program)

  • FIPS 140-2

    Tokens

  • CMVP

    (Cryptographic Module Validation Program)

  • CRL/OCSP

    Checking Certificate Status

  • Certificate Chain Processing

  • Certificate Path Construction

  • SSL

    SSL Key Agreement Verification

  • Digital Signature Verification

  • PKCS#10

  • PKCS#11

  • PKCS#7

  • PKCS#5
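As one concrete example from the list above, PKCS#5 (RFC 8018) specifies password-based key derivation via PBKDF2. A test harness can cross-check an implementation under evaluation against a known-good reference such as Python's `hashlib`. A minimal sketch; the fixed salt is for reproducibility only, a real deployment must use a random salt:

```python
import hashlib

def derive_key(password: bytes, salt: bytes, iterations: int = 600_000) -> bytes:
    """PBKDF2-HMAC-SHA256 as specified in PKCS#5 v2.1 (RFC 8018),
    returning a 32-byte derived key."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

salt = b"\x00" * 16        # illustrative; use os.urandom(16) in practice
k1 = derive_key(b"correct horse", salt)
k2 = derive_key(b"correct horse", salt)   # same inputs -> same key
k3 = derive_key(b"wrong battery", salt)   # different password -> different key
```

A conformance check then compares the device's output for fixed test vectors against this reference, byte for byte.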

Smart Card Testing Laboratory

Today, given the increasing use of encryption algorithms for data integrity and confidentiality, equipment such as hardware encryption modules, hardware tokens, and smart cards has become indispensable. These products, which are used for the secure storage and generation of encryption keys, must be evaluated for conformance to their security and functional standards. Smart cards are exposed to threats such as:

Unauthorized reading or copying of data stored on the card (card skimming)
Unauthorized duplication of the integrated circuit embedded in the card (card spoofing)
Side-channel attacks against the card
Smart Card Standards

  • ISO/IEC 7816
  • GlobalPlatform
  • ISO/IEC 15408
  • Java Card Specifications
    • Java Card RE Specification
    • Java Card VM Specification
    • Java Card API Specification
Testing capabilities:

  • Review of Java Card applet delivery, including the generated bytecode, classes, and supported libraries
  • Review of the life cycle of the smart card, application, and Security Domain
  • Review of communication protocols
  • Review of APDU commands and responses
  • Review of hashing and encryption algorithms
  • Review of the Secure Channel Protocol
  • Review of file access levels
  • Review of the Digital Signature Standard
  • Review of random number generation
Functional Security Testing

  • APDU interface testing
  • Access control validation (PINs, authentication)
  • Cryptographic function testing
  • Secure channel establishment (e.g., SCP03)
  • Secure applet life cycle (install, delete, personalize)

Vulnerability Analysis

  • Fuzz testing on APDU commands
  • Logical flaw testing (e.g., bypass authentication)

Penetration Testing (Advanced)

  • Reverse engineering (where possible)
  • Testing for cryptographic weaknesses (e.g., key leakage)
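The APDU-level reviews above operate on ISO/IEC 7816-4 command/response pairs. A minimal sketch of how a harness might build a short command APDU and parse status words; the AID shown is illustrative:

```python
def build_apdu(cla, ins, p1, p2, data=b"", le=None):
    """Construct a short ISO/IEC 7816-4 command APDU:
    CLA INS P1 P2 [Lc data] [Le]."""
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data   # Lc + command data
    if le is not None:
        apdu += bytes([le])                 # expected response length
    return apdu

def split_response(response: bytes):
    """Split a response APDU into (data, SW1, SW2)."""
    return response[:-2], response[-2], response[-1]

# SELECT by AID: CLA=00, INS=A4, P1=04, P2=00 (AID below is illustrative)
select = build_apdu(0x00, 0xA4, 0x04, 0x00, data=bytes.fromhex("A000000003"))

# A card answering 0x9000 signals successful processing
data, sw1, sw2 = split_response(bytes.fromhex("9000"))
```

Fuzzing the same interface amounts to generating malformed CLA/INS/Lc/data combinations with helpers like these and checking that the card rejects them with proper status words instead of misbehaving.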
Hardware Security Testing Laboratory

The Hardware Security Laboratory is one of the sub-laboratories of the IT department, specializing in the security evaluation of hardware devices. It focuses on identifying and addressing security threats in electronic products and embedded systems. This laboratory, based on internationally recognized standards, provides security assessment tests in two areas: vulnerability assessment and compliance evaluation based on standard requirements.

Types of products that can be tested:

Internet of Things (IoT) devices
Embedded Devices
Servers

Hardware Security Evaluation Standards

Security evaluation and tests in this laboratory are conducted based on the following international standards:

OWASP-IoT-Top-10
ETSI TS 103 646 (Test specification for foundational Security IoT-Profile)
ETSI EN 303 645 (Cyber Security for Consumer Internet of Things)

Vulnerability evaluation

Vulnerability evaluation is a process in which weaknesses and security threats in hardware devices and assets are identified. The results of this evaluation help security teams, system owners, and other stakeholders analyze vulnerabilities, implement necessary fixes, and enhance the security level of their systems.

IACS Testing Laboratory

The Industrial Automation and Control Systems (IACS) Security Laboratory is one of the sub-laboratories within the IT department, specializing in the security evaluation of Operational Technology (OT) under the title “Cybersecurity in Industry Based on the ISA/IEC 62443 Standard”.

In this laboratory, security testing is performed in accordance with the IEC 62443 standard, focusing on:

IEC 62443-4-2: Security requirements for IACS components (PLCs, HMIs, RTUs, IIoT devices, firewalls, etc.)
IEC 62443-3-3: Security requirements for IACS systems (networks, communication, user access, logging, etc.)

IEC 62443-4-2 Based Testing (Component-Level)

This standard defines technical requirements in four categories: Software Application Requirements (SAR), Embedded Device Requirements (EDR), Host Device Requirements (HDR), and Network Device Requirements (NDR). The tests validate that control and industrial automation components satisfy the component requirements (CRs) defined in Part 4-2:

  • CR 1 – Authentication & Identity: Ensure users/devices are who they say they are
  • CR 2 – Use Control: Limit access to functions/data
  • CR 3 – System Integrity: Protect against tampering
  • CR 4 – Data Confidentiality: Keep data private
  • CR 5 – Restricted Data Flow: Control communication paths
  • CR 6 – Timely Event Response: Detect & react to threats
  • CR 7 – Availability: Check DoS resistance and system recovery

IEC 62443-3-3-Based Testing (System-Level)

IEC 62443-3-3 defines the cybersecurity requirements at the system level, specifying the technical security requirements for complete IACS systems. These tests validate that the whole system (network of devices) operates securely and meets architectural and operational cybersecurity expectations:

  • SR 1 – Identification & Authentication: System-wide user management
  • SR 2 – Use Control: Role-based access control across the system
  • SR 3 – System Integrity: Configuration hardening and secure boot
  • SR 4 – Confidentiality: Encrypted communications between zones
  • SR 5 – Restricted Data Flow: Segmentation using zones/conduits
  • SR 6 – Monitoring & Logging: Central log collection and event response
  • SR 7 – Resource Availability: Backup, failover, system resilience

Security Levels

IEC 62443-3-3 (focused on system-level security for the entire IACS) and IEC 62443-4-2 (focused on technical requirements for individual components) both define and utilize Security Levels (SLs) appropriate to their scope.

  • SL 1 (casual or coincidental attacker): Protection against casual or coincidental violation
  • SL 2 (intentional attacker with simple means): Protection against intentional violation using simple means with low resources, generic skills, and low motivation
  • SL 3 (sophisticated attacker with moderate resources): Protection against intentional violation using sophisticated means with moderate resources, IACS-specific skills, and moderate motivation
  • SL 4 (highly motivated attacker with extended resources): Protection against intentional violation using sophisticated means with extended resources, IACS-specific skills, and high motivation
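In both parts, target and achieved security levels are expressed as vectors over the seven foundational requirements, and a component or system conforms only when every achieved level meets or exceeds its target. A minimal sketch of that comparison; the SL values chosen are illustrative:

```python
# The seven foundational requirements (abbreviated)
FRS = ["IAC", "UC", "SI", "DC", "RDF", "TRE", "RA"]

def sl_gaps(achieved: dict, target: dict) -> list:
    """Return the foundational requirements whose achieved SL falls
    below the target SL; an empty list means the vector conforms."""
    return [fr for fr in FRS if achieved.get(fr, 0) < target.get(fr, 0)]

# Illustrative target (SL-T) and achieved (SL-A) vectors for a component
target_sl   = {"IAC": 2, "UC": 2, "SI": 3, "DC": 2, "RDF": 2, "TRE": 2, "RA": 2}
achieved_sl = {"IAC": 2, "UC": 3, "SI": 2, "DC": 2, "RDF": 2, "TRE": 2, "RA": 2}

gaps = sl_gaps(achieved_sl, target_sl)  # SI achieved below target
```

Exceeding a target in one requirement (UC above) does not compensate for a shortfall in another; each dimension is checked independently.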

Vulnerability Evaluation

Vulnerability Evaluation in Industrial Automation Control Systems (IACS)/Operational Technology (OT) is the process of identifying, evaluating, and prioritizing weaknesses (vulnerabilities) in industrial control systems that could be exploited by cyber threats.

Approach for Vulnerability Evaluation in IACS/OT:

  • Planning & Scoping

    Define objectives, identify systems/networks/devices in scope, and involve both IT and OT teams

  • Reporting, Remediation & Mitigation

    Document findings and implement risk-based mitigation strategies

Source Code Security Evaluation Laboratory

In an era where cyber threats are rapidly evolving and targeting systems across various industries, cybersecurity has become more critical than ever. Secure coding is not merely a best practice—it’s a fundamental element of a robust cybersecurity strategy.

This laboratory is committed to identifying vulnerabilities at the source level. By analyzing and evaluating the security of code, we help organizations prevent costly breaches and build trust in their digital systems.

Evaluation of system software source code
Evaluation of application software source code
Evaluation of web and mobile application source code
Evaluation of firmware for industrial control devices and network equipment
Evaluation of source code for SDKs and web services

By using static and dynamic analysis methods, this laboratory identifies vulnerabilities and security issues in the source code and provides operational solutions to fix and improve them.
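As an illustration of static analysis, a scanner can walk a program's syntax tree and flag known-dangerous constructs without ever running the code. A deliberately tiny sketch for Python source; the deny-list is illustrative and far from a complete ruleset:

```python
import ast

DANGEROUS_CALLS = {"eval", "exec"}  # illustrative deny-list only

def find_dangerous_calls(source: str):
    """Flag direct calls to deny-listed builtins, returning
    (line number, function name) pairs."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "x = input()\nresult = eval(x)\n"   # user input fed to eval
findings = find_dangerous_calls(sample)
```

Dynamic analysis complements this by exercising the running program (fuzzing, instrumentation) to confirm whether flagged constructs are actually reachable and exploitable.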

Source Code Evaluation Standards

Although the ISO/IEC 15408 standard does not cover all the requirements for source code evaluation, particularly in terms of secure coding, it is used as the main standard for source code evaluation in this laboratory. Evaluations are conducted at the EAL4 level of the Common Criteria, which assesses not only the product’s security functionality but also aspects such as security architecture and the design and implementation process. The following standards, checklists, and authoritative references are used in the laboratory for assessing the security of source code.

Programming language-based standards

  • SEI CERT C Coding Standard

    For the C programming language

  • SEI CERT C++ Coding Standard

    For the C++ programming language

  • CERT Oracle Secure Coding Standard for Java

    For the Java programming language

Common Weakness Enumeration (CWE)

For programming languages that lack a valid secure coding standard, the laboratory uses CWE as a valid reference for categorizing and reporting various vulnerabilities. Since CWE includes a list of major software vulnerabilities, these vulnerabilities are considered evaluation criteria. Specifically, vulnerabilities such as SQL Injection, Cross-Site Scripting, and Input Validation will be examined in the source code.
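CWE-89 (SQL Injection) is a typical finding such a review looks for: attacker-controlled input concatenated into a query changes the query's meaning, while a parameterized query treats the same input as plain data. A self-contained demonstration using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: attacker-controlled string concatenated into SQL (CWE-89);
# the WHERE clause becomes name = '' OR '1'='1', matching every row.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Fixed: parameterized query; the payload is treated as a literal value
# and matches no user name.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
```

A source code review flags the first pattern (string concatenation into SQL) wherever it appears and verifies that all query construction uses parameter binding.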

OWASP Checklist

Another authoritative reference for source code evaluation is the OWASP checklist, which comprehensively covers various vulnerabilities in software applications. The OWASP Secure Coding Practices Quick Reference Guide includes 14 subsections, each covering an aspect of software security.

This checklist includes the following topics:

  • Input Validation

    Validation of data, length, and value constraints

  • Output Encoding

    Use of specific standards for sending output

  • Authentication and Password Management

    Review of authentication algorithms and password management

  • Session Management

    Evaluation of session management processes

  • Access Control

    Review of access grant methods and permission changes

  • Cryptographic Practices

    Evaluation of encryption methods and random key generation

  • Error Handling and Logging

    Examination of error management and log recording

  • Data Protection

    Evaluation of the security of sensitive data

  • Communication Security

    Review of communication security (such as TLS)

  • System Configuration

    Evaluation of system configuration practices

  • Database Security

    Evaluation of database communication security

  • File Management

    Examination of file management security

  • Memory Management

    Evaluation of memory management practices

  • General Coding Practices

    Use of valid and tested code
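As a concrete instance of the Output Encoding item above, user-supplied text should be encoded before being embedded in HTML so that any markup in the input is neutralized rather than executed. A minimal sketch:

```python
from html import escape

def render_comment(comment: str) -> str:
    """Encode user-supplied text before embedding it in an HTML
    fragment, neutralizing stored/reflected XSS payloads."""
    return "<p>" + escape(comment, quote=True) + "</p>"

payload = "<script>alert(1)</script>"
rendered = render_comment(payload)  # script tags become inert entities
```

A source code review checks that every point where untrusted data reaches an output context (HTML, JavaScript, URLs, SQL) applies the encoding appropriate to that context.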

AI Product Evaluation Laboratory

AI laboratories are crucial specialized centers addressing the rapid growth of intelligent technologies and AI-based tools. With their increasing impact on individuals and society, comprehensive evaluation covering performance, security, trust, and ethics is essential. To meet this need, RCII has launched an advanced AI lab. Using global standards and expert teams, the lab provides structured and reliable assessments, supporting industry demands and promoting the safe and accountable development of AI technologies.

Laboratory Approach

The AI Laboratory has been established with the aim of providing a structured, scientific, and reliable framework for evaluating artificial intelligence systems. The primary goal of this laboratory is to ensure the performance, security, trustworthiness, and ethical alignment of AI-based tools and technologies. To achieve this, the lab adopts a multidisciplinary approach that combines technical assessments with ethical and governance considerations. The evaluation process covers multiple dimensions—including data quality, functional performance, and vulnerability analysis—while emphasizing transparency and accountability. By aligning with international best practices and developing localized methodologies, the laboratory seeks to bridge the gap between innovation and assurance, empowering industries and institutions to deploy AI responsibly and effectively.

Artificial Intelligence Laboratory Capabilities

The Artificial Intelligence Laboratory, leveraging a team of experienced specialists across various AI domains and an outstanding track record in developing security and quality standards, provides comprehensive evaluation services for AI-based products.

Specializations:

Evaluation of machine learning and deep learning systems
Evaluation of natural language processing (NLP) systems
Evaluation of image processing systems
Evaluation of speech and voice processing algorithms

Our multidimensional evaluation process includes:

Comprehensive technical and quality performance review
Security analysis and vulnerability identification
Efficiency and productivity measurement
Ethical considerations and algorithmic fairness evaluation
Data governance standards assessment

AI Laboratory Standards

Given the emerging and evolving nature of AI system evaluation, there is currently no single, comprehensive, and globally unified standard dedicated solely to this domain. As such, the AI Laboratory at the Informatics Industries Research Center adopts a flexible and experience-driven approach, based on a combination of existing resources and expert insights. The key components of this approach include:

Development of Internal Guidelines:

  • Based on local requirements, global trends, and practical evaluation experience, the laboratory has developed (and continues to refine) its own reference documents for evaluating:

    • Performance, security, and reliability of intelligent systems.

  • These documents help establish a structured, adaptable, and credible framework for evaluating AI-based products.

Use of Related Standards and Documents:

The laboratory leverages standards from related areas—particularly information security and software evaluation—to build robust assessment frameworks. In this context, Protection Profiles and best practices from existing domains play a key role.

The laboratory incorporates relevant internationally recognized AI standards into its evaluation processes. Among the most important are:

AI Performance and Evaluation Standards

  • ISO/IEC TR 24027

    Bias in AI Systems and AI-Aided Decision Making

  • ISO/IEC TR 24028

    Overview of Trustworthiness in Artificial Intelligence

  • ISO/IEC 23894

    Guidance on AI Risk Management

AI Security and Privacy Standards:

  • ISO/IEC 27001

    Information Security Management (including AI systems)

  • ISO/IEC 23894

    Risk Management for AI and Mitigation of Security Threats

  • ISO/IEC TR 27550

    Privacy Engineering (applied to AI systems)

  • OWASP Machine Learning Security Top 10

    Top security risks in machine learning systems, including:

•   ML01: Input Manipulation Attack
•   ML02: Data Poisoning Attack
•   ML03: Model Inversion Attack
•   ML04: Membership Inference Attack
•   ML05: Model Theft
•   ML06: AI Supply Chain Attacks
•   ML07: Transfer Learning Attack
•   ML08: Model Skewing
•   ML09: Output Integrity Attack
•   ML10: Model Poisoning

  • 2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps

•   LLM01: Prompt Injection
•   LLM02: Sensitive Information Disclosure
•   LLM03: Supply Chain Vulnerabilities
•   LLM04: Data & Model Poisoning
•   LLM05: Improper Output Handling
•   LLM06: Excessive Agency
•   LLM07: System Prompt Leakage
•   LLM08: Vector & Embedding Weaknesses
•   LLM09: Misinformation / Hallucination
•   LLM10: Unbounded Consumption

  • AI Ethics and Governance Frameworks and Standards

•   IEEE 7000 – Ethical Design in Intelligent Systems
•   OECD AI Principles – Global AI Ethics and Governance
•   NIST AI Risk Management Framework – Risk-Based Approach to AI Security and Fairness
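As a concrete instance of one risk catalogued above (ML02: Data Poisoning Attack), injecting mislabeled training points can flip a model's decision on clean inputs. A toy demonstration with a nearest-centroid classifier; data and labels are illustrative:

```python
def centroid(points):
    """Mean of a list of equal-length coordinate tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: list of (features, label) -> per-class centroids."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Clean training data: two well-separated classes (illustrative values)
clean = [((0.0, 0.0), "benign"), ((0.2, 0.1), "benign"),
         ((5.0, 5.0), "malicious"), ((5.2, 4.9), "malicious")]

probe = (0.1, 0.0)  # clearly a "benign" sample
before = predict(train(clean), probe)

# Poisoning: attacker injects mislabeled far-away points into the
# "benign" class, dragging its centroid away from the benign region.
poison = [((10.0, 10.0), "benign")] * 6
after = predict(train(clean + poison), probe)
```

An evaluation for this risk checks training data provenance and retrains on audited subsets to detect exactly this kind of decision shift.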

By integrating these standards and continuously developing in-house resources, the AI Laboratory ensures its evaluations remain internationally aligned, locally relevant, and technically reliable.