The Bharat Pacific Principles
The Durgapur Principles on Mitigating Artificial Intelligence Hype and Enabling Careful Marketing Practices [The Durgapur AI Principles]
January 21, 2025 | Version 1.0
Imperative Classification as per AiStandard.io Alliance Charter, Schedule 1, Part B
Pre-regulatory [Commercial: Market preparation guidelines]
Stakeholder-attribution as per AiStandard.io Alliance Charter, Schedule 1, Part C
Government: [Central Government Ministries, Regulatory Bodies: Principles 1, 5, 9; Consumer Protection Agencies: Principles 1, 5, 10]
Communities: [Industry Consortiums: Principles 4, 6, 7, 8; Academic Institutions/Researchers: Principles 4, 8; Professional Associations: Principles 4, 7; Consumer Advocacy Groups: Principles 5, 10]
Organisations: [Large Enterprises, MSMEs, Startups, Technology Providers, Research Labs, Open-Source Communities, Social Enterprises: Principles 1-10; Media Organisations: Principles 4, 5, 7, 10]
Full Text of the Principles
Risk Assessment in Communication: Before engaging in any marketing communications or knowledge-sharing initiatives, organisations must adopt a risk-centric approach to evaluating their AI technologies. Organisations should:
Evaluate potential risks before making claims, ensuring that critical decisions do not rely solely on AI systems.
Clearly define the intended use and target audience of their products from the outset.
Establish human-in-the-loop and human-in-control measures to maintain oversight over AI decision-making processes.
Conduct comprehensive security assessments for AI systems, sharing findings transparently while utilising safe testing practices to ensure system integrity.
Authentic Implementation Commitment: Organisations must substantiate marketing claims with clear evidence of AI implementation, providing the essential information stakeholders need to understand a project's context and its impact on productivity and innovation. Marketing should focus on specific projects aligned with recognised thematic areas rather than generalised, ambiguous statements.
Demonstrated Maturity of Projects: Promote or market only those AI initiatives that have successfully completed pilot phases and can present quantifiable impact metrics. Projects should either be in the process of scaling or already industrialised.
Insightful Knowledge Sharing: Organisations are encouraged to share comprehensive insights related to their AI projects with market stakeholders including:
The broader organisational context and intended purpose of AI initiatives.
Essential skills required for effective project execution.
Degree of human input that may be required.
Robust methodologies for assessing ROI of AI initiatives, supported by empirical data.
Guarantees regarding the veracity, authenticity, and accuracy of datasets used for training AI.
Mitigation of Market Hype Cycles: Organisations must actively prevent artificial hype cycles while ensuring accurate market information about their AI technologies. This involves:
Clearly defining capabilities and limitations of AI technologies, particularly distinguishing between simple machine learning applications and more complex general AI tasks.
Avoiding exaggerated claims that misrepresent the potential impact or readiness of solutions, and refraining from false promises about outcomes.
Engaging with context-specific and issue-agnostic stakeholders to provide realistic assessments of expected outcomes.
Educating stakeholders about which tasks can be effectively accomplished using established machine learning techniques appropriate to their requirements.
Encouragement of Iterative Development and Feedback: Adopt an iterative approach to AI project development within marketing technologies by:
Seeking feedback from stakeholders throughout the design phase.
Using feedback to refine ideas and ensure alignment with ethical standards, akin to refining machine learning models based on performance metrics.
Promotion of Realistic Employment Trends: Present a balanced view of AI's impact on employment within the marketing sector, emphasising both job displacement and new opportunities. This involves:
Communicating the evolving nature of job roles due to AI advancements in marketing practices, rather than perpetuating narratives that suggest widespread job loss.
Highlighting upskilling initiatives to prepare the workforce for new roles emerging from AI integration.
Understanding and Contextualising Prompt Engineering: Recognise that prompt engineering, as a practice in AI interactions, should not be conflated with traditional engineering disciplines or specialised expertise. This involves:
Clarifying the Role of Prompt Engineering: Acknowledge that prompt engineering serves as a tool for optimising AI outputs but does not replace the need for domain-specific knowledge and expertise. It is essential to understand its limitations and the contexts in which it can be effectively applied.
Promoting Collaboration with Domain Experts: Emphasise the importance of collaboration between prompt engineers and professionals in the relevant fields. This ensures that AI-generated outputs are informed by accurate, specialised knowledge rather than relying solely on prompts.
Avoiding Misrepresentation of Opportunities: Address the tendency to overstate the significance of prompt engineering as a standalone career path. Organisations should promote realistic expectations regarding the value and application of this skill within the broader landscape of AI deployment, ensuring that it complements rather than detracts from established professional roles.
Security and Shutdown Protocols: Ensure that all AI systems utilised in marketing are designed with features that allow disconnection or shutdown, including halting the retention or processing of any personal data by systems within the organisation. The capacity to deactivate an AI system is essential for addressing significant ethical concerns regarding safety and accountability. Marketing of any such solutions must reflect this fundamental principle of responsible technology deployment.
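The shutdown requirement above can be illustrated with a minimal sketch. All class, method, and variable names here are hypothetical and for illustration only; the principle merely requires that deactivating a system both halts processing and stops the retention of personal data.

```python
import threading


class KillSwitch:
    """Illustrative shutdown control for an AI-driven marketing pipeline.

    Hypothetical sketch: deactivation must (a) refuse further processing
    and (b) purge any personal data the system has retained.
    """

    def __init__(self):
        self._active = threading.Event()
        self._active.set()                # system starts in the active state
        self._personal_data = []          # stand-in for retained personal data

    def process(self, record):
        # Refuse all work once the system has been deactivated.
        if not self._active.is_set():
            raise RuntimeError("system deactivated: processing halted")
        self._personal_data.append(record)
        return f"processed:{record}"

    def shutdown(self):
        # Deactivation both stops processing and purges retained data.
        self._active.clear()
        self._personal_data.clear()


sw = KillSwitch()
sw.process("lead-123")
sw.shutdown()
```

In a real deployment the same pattern would extend to distributed components (revoking credentials, draining queues, deleting stored records), but the contract stays the same: one deactivation path that provably stops both processing and retention.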
Transparency in AI Value Chains: Organisations must ensure clear and accessible documentation of all components within the AI value chain to foster trust and understanding among stakeholders, thereby enhancing accountability and mitigating misconceptions about AI capabilities.
This includes clearly communicating the limitations and intended uses of AI models, which helps stakeholders understand the contexts in which these solutions are effective or may fall short.
Organisations should support this transparency by providing comprehensive information on data sources, model performance, and operational constraints, together with clear documentation of the intended purpose, suggested uses, and limitations of their AI systems, including the data used to train their models.
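One way to make the disclosure items above concrete is a structured record attached to each released model. This is a minimal sketch only; the field names and all example values are hypothetical, and the principle prescribes what must be documented, not any particular schema.

```python
from dataclasses import dataclass, asdict


@dataclass
class ModelDisclosure:
    # Hypothetical schema covering the items the principle asks for:
    # purpose, uses, limitations, data sources, performance, constraints.
    intended_purpose: str
    suggested_uses: list
    limitations: list
    data_sources: list
    performance_summary: dict
    operational_constraints: list


# Illustrative values only; not real measurements or datasets.
disclosure = ModelDisclosure(
    intended_purpose="Rank marketing leads by engagement likelihood",
    suggested_uses=["prioritising outreach lists"],
    limitations=["not validated for credit, hiring, or other high-stakes decisions"],
    data_sources=["first-party CRM records (illustrative placeholder)"],
    performance_summary={"metric": "reported on held-out data (placeholder)"},
    operational_constraints=["batch scoring only", "human review of ranked output"],
)

record = asdict(disclosure)  # serialisable form for publication alongside the model
```

Publishing such a record with every model gives stakeholders a single place to check where a solution is effective and where it may fall short.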
Please note: "Organisations" refers to any entity engaged in the research, development, deployment, marketing, or use of artificial intelligence technologies, including but not limited to:
Micro, Small and Medium Enterprises (MSMEs)
Start-ups and emerging technology companies
Research Labs, including academic institutions, independent research organisations, and scientific/industrial research organisations
Open-Source Communities and Developer Associations
Social Enterprises, including NGOs, Self-Help Groups, charities, and donor organisations
Any other entity that develops, deploys, or significantly utilises AI technologies in their operations