AI Implementation Challenges: Strategic Considerations for Prompt Management, Data Integration, and Organizational Knowledge Sharing

It is not uncommon in professional contexts to encounter significant setbacks that challenge one’s assumptions and capabilities. Such setbacks may temporarily or permanently undermine progress due to overestimation of capabilities, insufficient foresight, or inadequate consideration of alternative perspectives.

As for previous posts, you can find all files and prompts on my GitHub account. See https://github.com/bflaven/ia_usages/tree/main/ia_managing_prompts

During the implementation of AI features in my organization, a critical limitation was identified that fundamentally disrupted our planned approach. Specifically, I had been functioning as the sole individual responsible for prompt development, while maintaining expectations that stakeholders would eventually assume greater involvement in this process. This situation has brought the issue of prompt management and distribution into sharp focus, necessitating identification of appropriate tools for prompt storage, sharing, and collaborative testing.

The operational requirements demand a solution that provides greater accessibility and usability than existing platforms such as MLflow. For reference, my previous experience with MLflow implementation is documented in the post “Enhance LLM Prompt Quality and Results with MLflow Integration”.

While data serves as the foundational resource—comparable to crude oil in industrial processes—prompts function as the refinement mechanism that transforms this raw material into valuable output. Although data quality remains significant, prompt optimization appears to hold even greater strategic importance. Prompts represent substantial knowledge repositories with considerable potential for organizational value creation.

The central challenge addressed in this post concerns organizational implementation: “How can an organization systematically ensure the validity of AI-generated content while facilitating collaborative sharing of prompts and development processes to enhance output quality?” In my specific context, working with journalists, editorial quality serves as the primary evaluation criterion—an inherently subjective standard that characterizes many quality assessments.

While data importance cannot be dismissed, evidence increasingly suggests that organizational intelligence and expertise are concentrated within prompt development and optimization. This raises critical questions regarding evaluation methodologies, testing protocols, and progress documentation for teams developing prompt engineering capabilities.

The relationship between data, AI systems, and prompts creates a complex interdependency. The equation “DATA + AI + PROMPTS” represents the optimal framework for organizational AI implementation, requiring effective management of each component. A systematic examination of this framework and targeted approaches for addressing each element would provide the foundation for successful organizational AI integration.

1. DATA

Regarding data management, the responsibility remains with individual organizations; extracting meaningful information from disorganized datasets presents significant challenges in terms of complexity, cost, and resource allocation. These processes are inherently time-intensive and yield uncertain outcomes—characteristics that contemporary business environments find particularly problematic, despite widespread acknowledgment that such limitations are unavoidable operational realities.

2. AI MODEL

AI implementation fundamentally involves considerations of financial investment, data security, and organizational autonomy. The central question becomes whether organizations should accept dependency on commercial solutions, particularly given that most companies lack the resources to develop proprietary large language models. Consequently, organizations must rely on available market offerings, selecting from providers such as Gemini, Claude, ChatGPT, Perplexity, Mistral, and others, while attempting to make informed decisions amid technological uncertainty.

This situation parallels traditional software or Software-as-a-Service procurement decisions, with the critical distinction that AI platforms are designed to address an extensive range of operational requirements. The strategic challenge therefore centers on maximizing the value derived from these licensing investments and ensuring optimal utilization of the selected platform’s capabilities.

3. PROMPTS… and prompts management CMS

The challenge lies in consolidating collective organizational knowledge and expertise into a comprehensive prompt repository that serves the specific requirements of professional disciplines. This approach reinforces the continued centrality of human expertise, despite the widespread availability of profession-specific prompt collections for legal practitioners, medical professionals, journalists, writers, designers, developers, and other specialized fields.

The primary objective remains maintaining human oversight and involvement—specifically implementing a “Human in the Loop” approach—throughout all prompt management processes. This necessitates the deployment of a content management system specifically designed for prompt administration and collaboration.

The underlying user requirements that drive this initiative to document prompt performance and facilitate knowledge sharing have been designated as “prompt_management,” though this concept also encompasses broader Prompt Content Management System functionality. This framework ensures that human judgment and professional expertise remain integral to the prompt development and validation process.

The user story behind the prompts’ management CMS prototype

How to build a simple prompt management CMS that performs, both locally and as a web application, the following operations:

  1. CRUD Prompts: Ability to add, view, and delete prompts, where a prompt has system and user parts plus pseudo variables, e.g. {{topic}}, {{language}}…, that can be replaced within the prompt.
  2. Run: Run a prompt against an LLM defined in the connectors to validate the output. A run relies on an API key (ChatGPT, Claude…), which means managing connectors to different existing LLM models.
  3. Categories & Tags: Assign categories and tags to prompts for better organization, for instance using tags like image, text, title… to indicate the nature of the prompt.
  4. User: Maintain personal entries for the prompts created by a user, so that bruno, for instance, as a registered user can easily find the prompts he made.
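The entities in this user story can be sketched in a few lines. This is a minimal illustration, not the actual prompt-cms-two schema: a plain dataclass stands in for the SQLModel table, and all field names (`owner`, `tags`, etc.) are my own assumptions.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Prompt:
    title: str
    system: str                               # system message
    user: str                                 # user message with {{variables}}
    category: str = "General"
    tags: list = field(default_factory=list)  # e.g. ["image", "text", "title"]
    owner: str = ""                           # registered user, e.g. "bruno"

def render(template: str, variables: dict) -> str:
    """Replace pseudo variables like {{topic}}; unknown ones are kept as-is."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )
```

With this, `render("Write about {{topic}} in {{language}}", {"topic": "autumn", "language": "French"})` substitutes the pseudo variables before the prompt is sent to a connector.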

Technically, in Python, I prefer FastAPI over Flask for the simple reason that I only know FastAPI.
https://github.com/topics/prompt-management

pip install fastapi sqlmodel uvicorn jinja2 python-multipart
uvicorn main:app --reload

Some sources of inspiration for prototyping the prompt management tool.

Testing some prompt management CMSs

The following represents an evaluation of commercially available tools that were tested and assessed, along with some experimental implementations developed to address my needs through rapid prototyping methodologies.

1. Promptlayer

In my opinion, the best software and the best inspiration for a quick CMS.

There are some nice video tutorials to get to grips with the essence of the software.

Some good resources to get inspiration for prompting.

# PROMPT MODEL
System
User

# working with promptlayer

# ai-poet-1 (v1)
System : You are a skilled poet. Write a poem based on user input
User : Please write a poem about {{subject}} in this {{language}}

# ai-poet-1 (v2, update)
System : You are a skilled poet. Write a poem based on user input in the language defined by the user. Make sure each poem is a haiku. Don't give any commentary in your response, just output the haiku poem.
User : Please write a poem about {{subject}} in this {{language}}
# create a dataset
# dataset-2-test-poem

Ocean waves | English
Autumn leaves | French
Mountain sunrise | Italian
City lights | Spanish
Childhood memories | French
Desert stars | English
Garden flowers | Italian
Rainy afternoons | Spanish
Ancient ruins | French
Forest whispers | Italian
# Evaluate (Create a new pipeline) 
batch-1-test-poetry
# make a diff
# check with llm : does the poem follow the specific haiku syllable count?
#Choosing the Best AI Model (Create a new pipeline) 
batch-1-best-poetry-model
#Create an agent

format-poem
You are given a poem. Please respond with a formatted poem.
This means you need to generate a catchy title

# example format
Title:  [POEM TITLE]

[POEM]
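The PromptLayer walkthrough above can be simulated locally. The sketch below renders the ai-poet-1 user template over a few dataset rows and runs a naive structural check in place of the “check with LLM” evaluation step; counting the actual 5-7-5 syllables would indeed require a model, so the local check is only an assumption-laden placeholder.

```python
USER_TEMPLATE = "Please write a poem about {{subject}} in this {{language}}"

# first rows of dataset-2-test-poem
DATASET = [
    ("Ocean waves", "English"),
    ("Autumn leaves", "French"),
    ("Mountain sunrise", "Italian"),
]

def render(template, **variables):
    """Naive {{variable}} substitution, as PromptLayer does server-side."""
    for name, value in variables.items():
        template = template.replace("{{%s}}" % name, value)
    return template

def looks_like_haiku(poem):
    """Cheap structural check: exactly three non-empty lines.
    A real pipeline would ask an LLM to verify the 5-7-5 syllable count."""
    return len([line for line in poem.splitlines() if line.strip()]) == 3

# the batch that an Evaluate pipeline would send to the model
batch = [render(USER_TEMPLATE, subject=s, language=l) for s, l in DATASET]
```

The same loop, pointed at two different connectors, is essentially what the batch-1-best-poetry-model pipeline does when comparing models.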

2. Promptstore

Another great tool, whose baseline corresponds exactly to what I wanted.

Prompt Store is like a CMS (Content Management System) for prompts. We think it’s important that prompts be managed separately from code so they are visible, easily refined, and can be measured to improve the performance of AI applications.

3. My own attempt

Why not try making a prototype with FastAPI and Jinja2Templates?

You can find the files in “prompt-cms-two” on my GitHub account at https://github.com/bflaven/ia_usages/tree/main/ia_managing_prompts/prompt-cms-two

4. A step-by-step attempt with Strapi

I tried to take advantage of Strapi, but without success: it is too complicated to configure or to make work when you just want to prototype, so I sheepishly fell back on the classic combination of FastAPI for the backend and the Jinja2Templates system to build the web app.

Source : https://strapi.io/

Like any other all-in-one tool, Strapi (or, in a different register, https://n8n.io/) is an ecosystem in itself into which one must dive to take full advantage of it. Once this learning effort is made, you can use it either fully or partially, but by choice and not for lack of understanding.

This situation with Strapi reminds me of the expression “jack-of-all-trades, master of none”.


# path
cd /Users/brunoflaven/Documents/01_work/blog_articles/ia_managing_prompts/

#check
node -v
npm -v

# Create a new Strapi project:
npx create-strapi-app@latest prompt-cms-one --quickstart
# Choose Walk the Dunes to avoid the connection to create a free Strapi Cloud account

# Set up an admin account
# Open the admin panel in your browser (http://localhost:1337/admin) and:

# First name: Bruno
# Last name: Flaven
# Email: test@test.com
# Password: tesT*testCmsprompt1

# Create a collection type named: Prompt
Prompt
Title (Text, Required)
Description (Rich Text)
Category (Enumeration: General, Education, Entertainment, Technology)
Tags (JSON)
{
  "tags": ["Article", "Journaliste", "Création"]
}


publicationStatus (Enumeration: Draft, Published, Default: Draft)
Rating (Integer, Min: 1, Max: 10, Optional)

# Create a collection type named: Connector

Name (Text, Required)
API_Key (Text, Required, Private)
Model (Text)
Enabled (Boolean, Default: True)
Description (Rich Text)

Source : https://strapi.io/blog/how-to-build-a-crud-app-with-react-and-a-headless-cms
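Had the setup gone further, the payoff would have been Strapi's auto-generated REST API: a Prompt collection type is exposed at /api/prompts, with the filters[field][$eq] query syntax. A sketch building such a query URL, where the host and page size are placeholders for illustration:

```python
from urllib.parse import urlencode

STRAPI_URL = "http://localhost:1337"  # default Strapi dev host

def prompts_query(category=None, page_size=25):
    """Build a Strapi-style REST query for the Prompt collection type."""
    params = {"pagination[pageSize]": page_size}
    if category:
        # matches the Category enumeration defined above
        params["filters[Category][$eq]"] = category
    return f"{STRAPI_URL}/api/prompts?{urlencode(params)}"
```

For example, `prompts_query("Technology")` yields the URL a frontend would fetch to list only the Technology prompts (with an API token in the Authorization header).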

Extra info: A Threat to AI

As with all creations, it is possible that in the coming months or years a major security flaw will be discovered in these models; in the meantime, these LLMs already remain vulnerable. This is evidenced by a rather basic but effective process developed by the Russians.

Indeed, by publishing en masse, Russian trolls have managed to influence the LLMs available on the market. The method is not so different from the one advocated by Steve Bannon with “Flood the Zone”: blur the line between truth and falsehood and/or dilute the importance of information by transforming the media world into a circus, precipitating the attrition of journalistic capacities.

Source : https://www.cambridge.org/core/books/disinformation-age/flooded-zone/388DFBCC7E50B02921023B28E87DD26F

This threat is what is called “LLM Grooming”.

One of the most alarming practices uncovered is what NewsGuard refers to as “LLM grooming.” This tactic is described as the deliberate deception of datasets that AI models — such as ChatGPT, Claude, Gemini, Grok 3, Perplexity and others — train on by flooding them with disinformation.

Blachez noted that this propaganda pile-on is designed to bias AI outputs to align with pro-Russian perspectives. Pravda’s approach is methodical, relying on a sprawling network of 150 websites publishing in dozens of languages across 49 countries.

Source : https://www.forbes.com/sites/torconstantino/2025/03/10/russian-propaganda-has-now-infected-western-ai-chatbots—new-study/

More info

PROMPT CMS

PROMPT RESOURCES

WP plugin for AI