BIREME Bulletin n. 102

Information Products and Services and AI use

With a view to advancing the use of Artificial Intelligence in its health information products and services while reaffirming its institutional values of ethics, transparency, and accountability, BIREME/PAHO/WHO has established a set of guidelines for the development and adoption of AI-based solutions within the Center.

The document was prepared based on internal guidance shared by the Pan American Health Organization (PAHO) in 2024, which establishes ethical principles and recommendations for the responsible use of the technology and refers to related publications by the World Health Organization (WHO) and the United Nations (UN). Accordingly, the technical manager of each product to be developed must ensure compliance with the WHO and PAHO guidelines on the use of large language models (LLMs) in health, including the document Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models.

At BIREME, the initiative continues ongoing efforts to incorporate AI to improve development work, prioritize innovation paths, and drive advances in products and services. It is grounded in governance of aspects relevant to AI, such as data protection, human validation, version registration, and risk analysis, ensuring responsible authorship and transparency in the use of Artificial Intelligence.

The guidelines organized by BIREME define criteria to ensure that the use of AI is transparent, secure, and aligned with PAHO/WHO values. Among the key points, the following stand out:

  • Data protection: the insertion of sensitive, internal, or identifiable data into external platforms without adequate contractual guarantees is prohibited.
  • Mandatory human validation: results from LLMs, such as ChatGPT, Gemini, Claude, and Copilot, must be treated as drafts and reviewed by experts before any institutional use.
  • Documentation and traceability: each solution must maintain a documented history, including model versions, revisions, and validations performed.
  • Transparency in institutional use: all AI-based products must explicitly state their limitations, reinforce the need for human validation, and clearly indicate when AI tools were used in content production, ensuring verification of the sources cited.
  • Authorship and responsibility: each solution must identify the person technically responsible for its creation and maintenance.
  • Risk assessment: before any solution is adopted, a feasibility and risk analysis must be conducted, considering ethical, confidentiality, cost, and technological dependency aspects.

With this initiative, BIREME strengthens its commitment to ethics, transparency, and responsibility in the use of generative artificial intelligence in health. “These are guidelines that contribute to information management governance and consolidate an institutional basis for responsible innovation,” stated Marcos Mori, head of development at BIREME.
