April 18, 2024

A Walk into Singapore’s Vision for the Future of AI

This article is the latest in our series on AI Risk Management frameworks, with a specific focus on the understanding and implications of developing these regulatory initiatives. We invite you to explore our previous pieces on the UK’s AI consultation and compliance with President Biden’s Executive Order on AI Governance.

1. Introduction

With the rise of Generative AI, firms are facing increased pressure to implement trustworthy AI systems. This has led to a need for organizations to update their current model governance frameworks to leverage the capabilities of Large Language Models (LLMs) in a controlled manner. In January 2024, the AI Verify Foundation, in collaboration with the Infocomm Media Development Authority (IMDA), introduced a Model AI Governance Framework for Generative AI. This framework marked the Singaporean government’s first step in expanding their existing AI landscape to address risks specific to Generative AI.

The new framework encourages organizations to align their practices with nine AI governance principles. These principles are consistent with other frameworks such as the EU AI Act (recently endorsed by all EU member states) and the NIST AI Risk Management Framework, and they center around widely known areas such as accountability, data transparency, privacy, and security. In addition to addressing these standard AI risks, Singapore’s authorities have flagged the relevance of incident reporting, the use of third-party entities for testing and assurance, and the independent validation of AI systems. The Model AI Governance Framework for Generative AI is the latest initiative in a broader AI ecosystem that Singapore began developing in 2019.

In this article, we deep dive into how these initiatives are shaping the national strategy on AI risk management, aiming to help practitioners leverage the wealth of information available within the extensive AI Singapore ecosystem.

2. The National AI Strategy

Singapore’s AI landscape is designed to position the nation alongside leading global AI initiatives, such as President Biden’s Executive Order on AI in the US, the Pro-Innovation Approach to AI Regulation in the UK, and the EU AI Act in Europe. Modeled after similar global efforts and shaped by extensive consultations and collaborations with international AI firms, Singapore’s national strategy encompasses a governance framework and toolkits that offer tests, reporting guidelines, and industry-derived use cases. These resources are designed to help AI practitioners apply the Generative AI principles effectively. The goal is to encourage Singapore’s industries to think ahead and initiate the work in their respective fields to broaden their risk frameworks and address the unique challenges posed by the use of generative AI.

Singapore’s vision for AI is captured through a series of initiatives spanning from 2019 to 2024 (see Table 1). The journey began in 2019 with the National AI Strategy, aimed at deploying AI across critical sectors including education, healthcare, and safety & security, alongside fostering the AI innovation ecosystem. 2020 saw the introduction of the Model AI Governance Framework, offering ethical and governance guidance for AI deployment in the private sector.

Year | Initiative | Description
2019 | National AI Strategy | Implement AI solutions across national projects in key sectors (education, healthcare, and safety & security) and invest in the development of the AI ecosystem to support innovation within the country.
2020 | Model AI Governance Framework | Guidance for private-sector organizations on ethical and governance considerations when deploying AI solutions.
2023 | AI Verify Governance and Testing Framework | Open-source AI governance testing framework and software toolkit. IMDA also set up the AI Verify Foundation to harness the collective contributions of the open-source community to develop AI Verify.
2023 | Guidelines on Data Privacy in AI | Advice on the use of personal data to develop machine learning (ML) AI models or systems, as well as the collection and use of personal data in such ML systems for decisions, recommendations, and predictions.
2023 | Discussion Paper on GenAI | Evaluation of risk assessment methods for the adoption of Generative AI.
2023 | Veritas Toolkit | Open-source toolkit developed by the Monetary Authority of Singapore to conduct assessments against the Fairness, Ethics, Accountability, and Transparency (FEAT) principles within the financial industry.
2024 | Model AI Governance Framework for Generative AI | Builds upon Singapore’s existing AI governance framework to specifically address GenAI risks.

Table 1: Timeline of key initiatives launched by Singapore to govern and guide the use of AI in the country.

Three AI Governance Frameworks

Among this diverse set of initiatives are three AI governance frameworks proposed by Singaporean authorities and institutions:

  1. Model AI Governance Framework (January 2020)
  2. AI Verify Governance and Testing Framework (June 2023)
  3. Model AI Governance Framework for Generative AI (January 2024)

The principles of these three frameworks overlap significantly, as depicted in Figure 1. The boxes colored in gray indicate principles shared across the frameworks and aligned with global AI initiatives, such as explainability, transparency, fairness, well-being, and safety. The colored boxes are unique to each framework and may reflect Singapore’s specific focus or emphasis on areas such as research and development, content provenance, human oversight, and testing and assurance.


Figure 1: AI principles across the three AI Governance frameworks: Model AI Governance, AI Verify Governance and Testing and Model AI Governance for Generative AI.

2.1. Model AI Governance Framework

Singapore’s involvement in AI governance began in January 2019 with the introduction of the Model AI Governance Framework at the World Economic Forum in Davos. This framework was designed to provide detailed and actionable guidance to organizations on key ethical and governance issues when deploying AI systems.

In 2020, an expanded edition of the framework was released, broadening the guidance and illustrating the application of the principles with detailed quantitative metrics and the reporting of validation test results across various industries and use cases. The framework suggests five core AI principles for guiding the development, deployment, and management of AI systems, as detailed in Table 2.

Principle | Description
Explainability | Ensure AI decisions and processes are as understandable to humans as possible.
Transparency | Be open about AI systems’ functionality and management, including data, models, and decision frameworks.
Fairness | Strive to eliminate biases in AI, guaranteeing equitable and nondiscriminatory AI systems.
Well-Being | Consider AI’s broader societal impacts, emphasizing its positive contributions to human welfare.
Safety | Prioritize reliable and safe AI operations, minimizing undue risks.

Table 2: The five principles of the Model AI Governance Framework.

To effectively implement AI principles, the framework has identified four key governance areas, as shown in Table 3. These governance areas are designed to help firms better monitor and mitigate AI risks based on the AI principles detailed in the table above.

Governance Area | Description
Risk Governance | Establish accountability, compliance, and risk management frameworks for AI systems.
Human Supervision | Emphasize the crucial role of human oversight in AI decisions, ensuring AI complements human judgment.
Operations Management | Manage AI systems, including data handling, model development, and maintenance, with a focus on performance and ethical integrity.
Stakeholder Management | Engage and communicate with external stakeholders about AI systems, especially regarding AI risks and limitations.

Table 3: The four governance areas of the Model AI Governance Framework.

To ensure that the Model AI Governance Framework aligns with globally accepted AI principles, Singapore has actively engaged in international collaborations with organizations and institutions such as the EU, OECD, and the National Institute of Standards and Technology (NIST). As a result, the framework is designed to be algorithm-agnostic, technology-agnostic, and sector-agnostic, making it a baseline for organizations operating in any sector and use case.

2.2. AI Verify Governance and Testing Framework

In response to the demand for effective AI governance, the AI Verify Foundation has developed AI Verify, a tool for testing and evaluating AI systems. This move addresses the global need for reference examples of AI testing capabilities and governance that satisfy both corporate and regulatory requirements.

The Foundation’s initiative is supported by a coalition of technology leaders, including: the Infocomm Media Development Authority (IMDA), Aicadium (Temasek’s AI Centre of Excellence), IBM, Microsoft, Google, Red Hat, and Salesforce. These entities are set to guide the future development of the AI Verify roadmap.

The AI Verify toolkit is designed to integrate into existing company operations. Its key function is to enable users to perform technical assessments on AI models and document the processes thoroughly. The toolkit aims to promote transparency in AI systems, generating reports that facilitate understanding among stakeholders. However, these reports, while comprehensive, must be contextualized within the broader workings of the AI system for full transparency. Effective model documentation, adhering to best practices, is essential in achieving this goal.
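To make the idea of a "technical assessment" concrete, the sketch below shows the kind of check such toolkits automate: a demographic parity comparison on a classifier's outputs, summarized as a report entry. This is an illustrative example in plain Python, not AI Verify's actual API; the data, the metric choice, and the pass/fail threshold are all assumptions for the sake of illustration.

```python
# Illustrative sketch (not AI Verify's actual API): a demographic parity
# check on a binary classifier's predictions, summarized as a report entry.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + pred, total + 1)
    rates = {g: p / t for g, (p, t) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs for applicants in groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
report = {
    "test": "demographic_parity",
    "gap": gap,                # 0.75 (group A) - 0.25 (group B) = 0.5
    "passed": gap <= 0.25,     # threshold is an illustrative choice
}
print(report)
```

A real assessment would run many such tests (performance, robustness, fairness) and bundle the results into the documentation the toolkit generates for stakeholders.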

As of April 2024, AI Verify is available as a minimum viable product (MVP). This initial version does not yet include capabilities to assess generative AI or LLMs. However, the Foundation is actively exploring the expansion of the AI Verify Toolkit to include these areas. The Generative AI Evaluation Sandbox, a newly established component of AI Verify, serves as a testbed for real-world evaluations of AI systems. Insights from these tests are instrumental in the ongoing development of the Model AI Governance Framework for Generative AI, ensuring its applicability in the dynamic field of AI.

While the AI Verify testing framework and the Model AI Governance Framework for Generative AI operate independently, their objectives are closely linked. The latter specifically aims to guide Singaporean institutions in using generative AI in compliance with industry regulations, while the former focuses on providing concrete tools for quantitative assessment and reporting. The coordinated development of these initiatives is expected to contribute significantly to the field of AI, both locally and internationally.

Supporting these efforts is IMDA’s AI Verify framework, which includes 11 AI principles that align with international AI standards (Table 4).

Principle | Description
Transparency | AI systems should be open in their operations, allowing stakeholders to understand how they function and are used.
Explainability | AI decisions and processes should be understandable by humans, providing clarity on how conclusions are reached.
Reproducibility | AI systems should consistently produce the same results under the same conditions, ensuring reliability.
Safety | AI systems must operate safely under all conditions, minimizing risks to users and the environment.
Security | AI systems should be protected against unauthorized access and cyber threats.
Robustness | AI systems should be resilient and function correctly even when faced with challenges or changes in their environment.
Fairness | AI systems should make decisions impartially, equitably, and without bias.
Data Governance | The management of data used by AI systems should be ethical and compliant with relevant standards and regulations.
Accountability | There should be mechanisms in place to hold the appropriate entities responsible for the AI system’s performance and outcomes.
Human Oversight | AI systems should support human decision-making and actions, with adequate human control over their operation.
Well-Being | AI systems should contribute positively to societal and environmental progress and not exacerbate inequalities.

Table 4: Overview of the AI Principles Included in IMDA’s AI Verify Framework.
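Several of these principles lend themselves to automated verification. The sketch below illustrates one of them, reproducibility: the same pipeline, run twice under the same fixed seed, should yield identical outputs. The "training run" here is a toy stand-in, not any real AI Verify test; the function name and metric are assumptions for illustration.

```python
# Illustrative reproducibility check in the spirit of the AI Verify
# principle: identical seeds should yield identical results.

import random

def train_and_score(seed):
    """Toy stand-in for a training run: returns a score that is
    deterministic once the random seed is fixed."""
    rng = random.Random(seed)
    data = [rng.random() for _ in range(100)]
    return sum(data) / len(data)  # pretend this is a model metric

run_1 = train_and_score(seed=42)
run_2 = train_and_score(seed=42)
assert run_1 == run_2, "pipeline is not reproducible under a fixed seed"
print("reproducible:", run_1 == run_2)
```

In practice, a reproducibility test would also pin library versions, data snapshots, and hardware settings, since seeding alone does not guarantee bit-identical results across environments.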

2.3. Model AI Governance Framework for Generative AI

Through consultations and discussions with various stakeholders, including industry experts and policymakers, IMDA and the AI Verify Foundation have jointly identified nine guiding principles designed to tackle the specific risks emerging from Generative AI. Table 5 presents a list of these principles and their key guidelines.

Principle | Description
Accountability | Create incentives for all parties in the AI development chain to prioritize end-user welfare. Develop clear guidelines and responsibilities for each stakeholder.
Data Integrity | Utilize only trusted and verified data sources. Establish clear policies for the use of sensitive or contentious data. Ensure transparency and fairness in data usage.
Trusted Development and Deployment | Adopt industry best practices for AI development and evaluation. Implement transparent disclosure policies akin to “food label” transparency. Encourage regular audits and reviews of development processes.
Incident Reporting | Set up robust mechanisms for monitoring and reporting AI-related incidents. Create a centralized database for incident tracking and analysis.
Testing and Assurance | Encourage the use of third-party testing and assurance services. Develop and adhere to common standards for AI testing. Support independent verification of AI systems to build trust.
Security | Adapt existing information security frameworks to include AI-specific concerns. Develop and implement new testing tools for AI security.
Content Provenance | Ensure transparency in the origins and creation process of AI-generated content. Provide tools and methods for end-users to verify content provenance.
Safety and Alignment R&D | Invest in research and development focused on AI model safety. Foster global collaboration among AI safety institutes.
AI for Public Good | Promote the use of AI for societal benefits and upliftment. Enhance AI access and adoption in the public sector.

Table 5: Key Principles of the Model AI Governance Framework for Generative AI.
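The Incident Reporting principle above calls for centralized tracking of AI-related incidents. As a minimal sketch of what one record in such a database might look like, the example below defines a simple structured incident entry. The field names and values are assumptions for illustration, not a schema prescribed by the framework.

```python
# Illustrative sketch of an AI incident record, in the spirit of the
# framework's Incident Reporting principle. Field names are assumptions,
# not a prescribed schema.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIIncident:
    system_name: str
    severity: str            # e.g. "low", "medium", "high"
    description: str
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A centralized incident log could simply append records like this one.
incident = AIIncident(
    system_name="loan-approval-model",  # hypothetical system
    severity="medium",
    description="Unexpected refusal-rate spike for a customer segment.",
)
print(asdict(incident))
```

Capturing incidents in a structured, timestamped form is what makes the centralized analysis envisioned by the framework possible, since free-text reports are hard to aggregate.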

3. Regulatory Approach and Legislation

Singapore has not yet issued legislation that specifically regulates the general use of AI. As such, there are no direct penalties for non-compliance with AI regulations per se. This is in contrast to jurisdictions that have taken a more prescriptive approach to AI regulation, with significant penalties for non-compliance, such as the EU under its AI Act. However, organizations must still comply with relevant laws when deploying AI technology, such as those relating to safety, personal data protection, fair competition, healthcare, autonomous vehicles, or finance. Non-compliance with these regulations could lead to financial penalties.

Key laws affecting AI systems include:

  • Data Protection: The Personal Data Protection Act (PDPA) governs AI systems processing personal data in Singapore, and firms handling the data of EU residents must also comply with the General Data Protection Regulation (GDPR). Firms must ensure data safety, conduct pre-rollout testing, implement advanced security measures, and establish incident reporting systems.
  • Intellectual Property: The National Artificial Intelligence Strategy emphasizes the importance of IP rights in AI innovation. Companies must obtain necessary approvals from copyright holders before using their IP to train AI models.
  • Financial Sector and AI: The Monetary Authority of Singapore (MAS) has issued principles to promote fairness, ethics, accountability, and transparency (FEAT) in the use of AI and data analytics. Financial sector firms must align their AI applications with these principles.

4. The Road Ahead

If Singapore aims to position itself among the leading global powers in AI, it must recognize that this ambition comes with challenges that other nations are already facing. Therefore, we believe it is important to share some of the obstacles that Singapore, like other countries, will face in the coming years. Key among these are balancing regulatory pace with rapid technological advancement, ensuring international consistency, effective incident reporting and transparency, and maintaining technological neutrality and flexibility.

To better understand these challenges, we outline several key issues that need careful consideration:

  • Regulatory Pace: The rapid development of AI technologies, including generative AI, often outpaces regulators’ ability to adapt, leaving a gap where new AI applications may operate without clear guidelines.
  • International Consistency: Discrepancies in regulatory approaches across jurisdictions can complicate the global deployment of AI solutions and may lead to regulatory arbitrage, where companies choose to operate in regions with more favorable regulatory environments. This inconsistency can undermine efforts to establish common standards and principles that ensure the responsible use of AI technologies worldwide.
  • Reporting Transparency: Effective incident reporting and transparency can be challenging, especially when dealing with third-party AI vendors or handling private data under regulations like the General Data Protection Regulation (GDPR). Ensuring that AI developers and deployers disclose relevant information about their models and operations requires a level of transparency that may be difficult to achieve universally. Additionally, concerns about disclosing proprietary information or sensitive data can further complicate efforts to standardize disclosure practices.
  • Neutrality: Governance frameworks should aim to be technologically neutral, avoiding favoritism towards specific AI technologies or methodologies. This neutrality is important to prevent stifling innovation by inadvertently promoting certain approaches over others. Furthermore, as AI technology continues to evolve, governance frameworks must be flexible enough to adapt to new developments and challenges. This flexibility is crucial for ensuring that regulations remain relevant and effective in addressing the dynamic nature of AI technologies.

5. Conclusions

As the landscape of AI regulation evolves rapidly, staying updated and identifying the applicability and value of each framework present significant challenges. In response, we have compiled the wealth of information available within Singapore’s AI landscape to highlight what we believe holds considerable value for both developers and those in AI governance roles. It is worth mentioning that while guidelines are useful, we must be careful to avoid establishing redundant rules that echo existing principles. We encourage Singaporean authorities to continue developing these valuable AI tools and governance frameworks and welcome the unification of principles and initiatives. We hope that this article helps practitioners navigate the extensive details on principles and tools and equips them with accessible and practical insights to unlock their AI potential.

Interested in learning more? ValidMind is here to help. Click here to connect with us and speak with one of our experts today.

