Updated Code of Practice for service providers


Summary

The updated code of practice for service providers refers to newly published guidelines or rules that help companies offering digital services—like payment processing or artificial intelligence—follow best practices around transparency, safety, governance, and legal compliance. These codes are designed to set clear expectations and ensure that service providers operate in a trustworthy and responsible manner, often aligning with laws like the EU AI Act or banking regulations.

  • Review core policies: Take time to update board structures, documentation, and governance frameworks to meet the new requirements set out in the revised code.
  • Share model information: Make technical details and safety practices accessible to regulators, downstream partners, and affected users through formal documentation and open communication channels.
  • Maintain compliance checks: Schedule regular training, submit required reports, and establish processes for addressing risks such as copyright infringement and cybersecurity threats.
Summarized by AI based on LinkedIn member posts
  • Ruhaina Razak CA, ACIB, CIMA, ISO Risk Manager Certified

    Manager, Corporate Governance, Risk and Compliance Services | ICFR | Internal Audit | AML/FT | KPMG Ghana

    🔔 New Corporate Governance Guidelines for PSPs – What You Need to Know

    The Bank of Ghana has issued comprehensive guidelines to enhance trust, transparency, and oversight within the digital financial services sector. These apply to all Payment Service Providers (PSPs) licensed under Act 987, including Dedicated Electronic Money Issuers (DEMIs), Enhanced Payment Service Providers (EPSPs), and others. The objective of this directive is to foster a strong governance culture that safeguards stakeholder interests and boosts public confidence. As a PSP, what do you need to do to ensure compliance?

    📌 Here are the key compliance actions required before 31st December 2025. To ensure full compliance with the guidelines, Payment Service Providers must:

    ✅ Review and update board composition to ensure the required mix of non-executive and independent directors.
    ✅ Develop or revise the Board Charter to define governance roles, meeting frequency, appointment terms, and ethical expectations.
    ✅ Establish or reconstitute the required Board Committees (Audit and Risk & Compliance) with clear charters and qualified, independent leadership.
    ✅ Appoint or confirm qualified key management personnel, and obtain the Bank of Ghana’s approval for all appointments.
    ✅ Implement formal succession plans for directors and senior management positions.
    ✅ Design or strengthen internal control systems, risk management frameworks, anti-money laundering controls, and cybersecurity policies.
    ✅ Submit annual declarations to the Bank of Ghana on regulatory compliance and any material deficiencies identified.
    ✅ Develop or review conflict of interest policies and formal codes of conduct for staff and directors.
    ✅ Ensure timely and complete disclosures to the Bank of Ghana, especially relating to ownership, governance, and related party transactions.
    ✅ Schedule and complete director certification training from an approved institution, such as the National Banking College.

    #corporategovernance #paymentserviceproviders

  • Mateusz Kupiec, FIP, CIPP/E, CIPM

    Institute of Law Studies, Polish Academy of Sciences || Privacy Lawyer at Traple Konarski Podrecki & Partners || DPO || I know GDPR. And what is your superpower?🤖

    🤖 On 10 July 2025, the European Commission officially published the long-anticipated General-Purpose #AI Code of Practice—a non-binding yet influential document that aims to guide providers of general-purpose AI (GPAI) models in demonstrating compliance with the AI Act. Developed over the course of ten months by independent experts and informed by over 1,000 stakeholders, the Code provides a structured pathway to legal alignment in three critical areas: safety and security, copyright, and transparency. It arrives just weeks before the first legal obligations on GPAI models under the AI Act come into effect in August 2025.

    Although voluntary, adherence to the Code may offer providers a “clear, collaborative route to compliance,” as the Commission puts it, along with reduced administrative burden and potentially lower fines in case of enforcement proceedings. Yet its publication has not been without controversy. Some EU lawmakers claim that last-minute revisions weakened provisions on public transparency and systemic risk oversight. Industry associations have also voiced concern that the Code is too prescriptive in parts, particularly regarding copyright. Despite this, the Commission stands firm: the rules will apply on time, and this Code is the key instrument for operational readiness.

    The transparency chapter of the Code is particularly relevant for privacy professionals, as it details how providers can meet the documentation and disclosure obligations set out in Article 53(1)(a) and (b) AI Act and related Annexes XI and XII. These obligations apply broadly to all GPAI model providers unless an open-source exemption applies. The core requirement is the creation and maintenance of a Model Documentation Form that includes technical and operational information about the model—such as its intended purpose, limitations, capabilities, and safety measures. The documentation must be updated over the model lifecycle, and previous versions must be retained for at least ten years.

    Providers must also publish contact information for requesting access to the documentation and respond within specific timeframes to legally grounded requests from the AI Office or national authorities. Information-sharing obligations also extend to downstream providers—typically developers of AI systems using the GPAI model—who rely on this information to meet their own compliance duties.

    The Code contains safeguards for confidentiality, requiring that authorities and downstream providers protect trade secrets, intellectual property, and sensitive business information. Providers are further encouraged to publish parts of the documentation voluntarily to promote public transparency and trust in AI technologies. Importantly, while the Code doesn’t create a presumption of compliance, it does carry legal weight: under Article 101 AI Act, the AI Office may consider adherence to the Code when determining sanctions.

    #aiact #privacy #law
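
The Model Documentation Form described in the post above is defined by its required content (Annexes XI and XII of the AI Act), not by any particular file format. Purely as an illustration of how a provider might keep such documentation current, versioned, and retained, here is a minimal Python sketch; the `ModelDocumentationRecord` class, its field names, and the example values are hypothetical assumptions, not terminology from the Code.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch only: the AI Act prescribes the *content* of the
# Model Documentation Form, not a data structure. All names below are
# illustrative placeholders.
@dataclass
class ModelDocumentationRecord:
    model_name: str
    version: str
    release_date: date
    intended_purpose: str
    capabilities: list[str]
    known_limitations: list[str]
    safety_measures: list[str]
    documentation_contact: str  # published contact point for access requests
    superseded_versions: list[str] = field(default_factory=list)  # retain for at least 10 years

# Example record for a fictional model.
record = ModelDocumentationRecord(
    model_name="example-gpai-model",
    version="1.2.0",
    release_date=date(2025, 8, 1),
    intended_purpose="General-purpose text generation for downstream integration.",
    capabilities=["text generation", "summarization"],
    known_limitations=["may produce inaccurate outputs", "not evaluated for high-risk uses"],
    safety_measures=["pre-release red-teaming", "output filtering"],
    documentation_contact="model-docs@example.com",
)
print(record.model_name, record.version)
```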

  • Katharina Koerner

    AI Governance & Security | Trace3: All Possibilities Live in Technology: Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    It’s been a big month in AI governance - and I’m catching up with key developments. One major milestone: the EU has officially released the final version of its General-Purpose AI (GPAI) Code of Practice on July 10, 2025. Link to all 3 chapters: https://lnkd.in/gCnZSQuj

    While the EU AI Act entered into force in August 2024, with certain bans and literacy requirements already applicable since February 2025, the next major enforcement milestone arrives on August 2, 2025—when obligations for general-purpose AI models kick in. The Code of Practice, though voluntary, serves as a practical bridge toward those requirements. It offers companies a structured way to demonstrate good-faith alignment—essentially a soft onboarding path to future enforceable standards.

    * * *

    The GPAI Code of Practice, drafted by independent experts through a multi-stakeholder process, guides model providers on meeting transparency, copyright, and safety obligations under Articles 53 and 55 of the EU AI Act. It consists of three separately authored chapters:

    → Chapter 1: Transparency
    GPAI providers must:
    - Document what their models do, how they work, input/output formats, and downstream integration.
    - Share this information with the AI Office, national regulators, and downstream providers.
    The Model Documentation Form centralizes required disclosures. It’s optional but encouraged to meet Article 53 more efficiently.

    → Chapter 2: Copyright
    This is one of the most complex areas. Providers must:
    - Maintain a copyright policy aligned with Directives 2001/29 and 2019/790.
    - Respect text/data mining opt-outs (e.g., robots.txt).
    - Avoid crawling known infringing sites.
    - Not bypass digital protection measures.
    They must also:
    - Prevent infringing outputs.
    - Include copyright terms in acceptable use policies.
    - Offer a contact point for complaints.
    The Code notably sidesteps the issue of training data disclosure—leaving that to courts and future guidance.

    → Chapter 3: Safety and Security (applies only to systemic-risk models like GPT-4, Gemini, Claude, LLaMA)
    Providers must:
    - Establish a systemic risk framework with defined tiers and thresholds.
    - Conduct pre-market assessments and define reevaluation triggers.
    - Grant vetted external evaluators access to model internals, chain-of-thought reasoning, and lightly filtered versions—without fear of legal retaliation (except in cases of public safety risk).
    - Report serious incidents.
    - Monitor post-market risk.
    - Submit Safety and Security Reports to the AI Office.

    * * *

    Industry reactions are mixed: OpenAI and Anthropic signed on. Meta declined, citing overreach. Groups like CCIA warn it may burden signatories more than others. Many call for clearer guidance—fast.

    Regardless of EU regulation or US innovation, risk-managed AI is non-negotiable. Strong AI governance is the baseline for trustworthy, compliant, and scalable AI. Reach out to discuss! #AIGovernance
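
One of the copyright commitments listed above is respecting text and data mining opt-outs expressed via robots.txt. As a minimal sketch of what such a check can look like, the snippet below uses Python’s standard-library urllib.robotparser; the crawler name `ExampleGPAIBot` and the example URLs are placeholders, and a real crawling pipeline would also need to handle other machine-readable opt-out signals and fetch failures.

```python
import urllib.robotparser

# Minimal sketch: consult a site's robots.txt before adding its pages to a
# training corpus. "ExampleGPAIBot" and the URLs are placeholder values.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the site's robots.txt

page = "https://example.com/articles/some-page"
if rp.can_fetch("ExampleGPAIBot", page):
    print("No robots.txt opt-out for this path; it may be considered for crawling.")
else:
    print("robots.txt opts this path out; exclude it from the training corpus.")
```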
