Rescuing Failed AI Implementations + Practical Explorations with n8n, Ollama & GEO
When Your AI Project Goes Off the Rails: A Survival Guide
Professionally, losing your footing in a digital product project is just… normal. In management speak, we call it losing the “North Star.” This happens to me all the time—as a project manager, as a product owner, you name it. Lucky for me, I’m pragmatic, so there are really only two things to do:
- Figure out why the project is going off the rails
- Decide what we can actually do about this impending disaster
As with previous posts, you can find all files and prompts on my GitHub account. See https://github.com/bflaven/ia_usages/tree/main/ia_using_n8n_io
This is basically my quarterly ritual in the project I’m running: implementing AI within the company I work for.
The audio extracted from this article was made with NotebookLM.
AIaaS™: AI as a Product
I thought it was funny to create an acronym: AIaaS™ (with lowercase ‘aa’)—visually distinct, mirrors the whole SaaS/PaaS/IaaS naming thing.
Although I personally consider AI a product, I keep hearing this: AI is not a digital product like any other. Not like a mobile app, a website, or a CMS, and therefore not subject to the same practices and requirements. What does that even mean, “not a product”? In what way? Because it’s relatively new? Yeah, debatable.
In the implementation process, this quickly becomes an organizational problem, not a technical one. AI, like any “disruptive” technological innovation, reshuffles who knows what. It speeds up digital transformation by cutting through the BS* and exposing self-proclaimed experts. For an organization, it’s basically a litmus test for how mature you are at managing creativity—a real measure of your resistance to change, more than any technical challenge.
* Ahem. It means “cut through the bullshit”! In other words: get straight to the point.
Anyway, these are the “human” reasons that sometimes make me lose my strategic perspective on AI. Here’s my list—not exhaustive, and everyone will recognize themselves here, me included: contradictory orders, intellectual laziness, comfort zones, procrastination, etc.
Back to Practice
Once again, no matter how perceptive you are about the causes, you don't achieve much with intuition alone; you need action. Remember Thomas Edison's quote:
Vision without execution is hallucination.
That’s why I always return to this blog to demonstrate my practice in AI—writing posts has become a discipline where I can clarify my own experiences and identify potential pitfalls for future projects! What a suck-up I am!
But let’s get back to the fun part: is AI a product or not? Let’s be honest though: unlike a classic digital product like a web app, AI does have some additional and previously unknown dimensions. It doesn’t just disrupt usage patterns—it radically transforms them. When you’re doing generative AI, since AI works through mimicry, it always generates content (code, text, images) faster and sometimes better than humans would. You can’t deny that this mechanically competes with all of us content producers (product owners, developers, journalists… the list goes on forever).
Once you’ve acknowledged these disruptions, what can you actually do to fix things and get an AI project back on track?
In my view, the first step toward “recovering” a “sick” project—without even talking about “curing” it—is having the honesty to assess the situation as objectively as possible so you can start moving again. One piece of collateral damage: my relative silence on this blog. The last post was from June 25, 2024. Three months without writing anything is a lot by my usual publishing rhythm. To break out of a stuck situation, what better “remedy” than to write it out—i.e., verbalize the obstacles, doubts, even fears as objectively as possible, without pointing fingers or blaming the entire world.
Designating the guilty party is the most human reaction, but it’s taking the easy way out with the guaranteed risk of burying the project for good. Sometimes that’s necessary. There is indeed an immediate gain in recognizing definitive failure: “fail fast, fail cheap”—knowing when to cut your losses. Full awareness through writing isn’t enough though. It also needs to combine with adjustments, brutal or delicate—together they might do the job.
So that’s the first part of this post. The second part is more operational: using Ollama and several LLMs, exploring n8n for automation POCs with AI—topics that have been on the top of my mind lately.
Now that we’ve established the principle that “acknowledging your mistakes or failures is essential for progress,” let’s apply it to my recent situation of being stuck.
What Is My Real Mission?
My mission these days is to assess editorial quality and improve it using the trio: LLM, prompts, and functional improvements—whether one at a time or all at once. As for the functional improvements, I don’t really have control over those since I provide an API that gets consumed in a context I don’t totally master.
I work on generative AI quality with people whose job is, let’s say, to produce text that makes sense. My god, it’s complicated.
The quality of generative AI is undoubtedly linked to the prompt and the models you use, but it’s also tied to technical and functional improvements you can implement. I’ve done enough development (CMS, mobile apps, websites) to know that UX is crucial for feature adoption—and that has absolutely nothing to do with AI.
Let me explain with a user story: imagine you want to generate a title with AI. The function is a combination of AI, sure, but mostly the ability to interact “on demand” or “retry”—to reject a suggestion if needed and get another one.
So this functionality has more to do with your web application’s ability (a CMS, for example) to implement this feature and re-trigger your prompt than it does with choosing the model or evolving the prompt itself.
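To make this concrete, here is a minimal Python sketch of that CMS-side loop. Everything in it is hypothetical: `generate_title` stands in for a real LLM call, and `accept` stands in for the user clicking "keep" or "retry".

```python
def generate_title(article_text: str, seed: int) -> str:
    """Placeholder for a real LLM call; varies its output per attempt."""
    styles = ["How {} changes everything", "{}: a field report", "Why {} matters"]
    topic = article_text.split()[0]
    return styles[seed % len(styles)].format(topic)

def suggest_title(article_text: str, accept, max_retries: int = 3) -> str:
    """Re-trigger the prompt until the user accepts a suggestion or retries run out."""
    candidate = ""
    for attempt in range(max_retries):
        candidate = generate_title(article_text, attempt)
        if accept(candidate):
            return candidate
    return candidate  # last suggestion kept as a fallback

# a user who rejects the first suggestion and accepts the second
title = suggest_title("n8n automation basics", accept=lambda t: "report" in t)
# title == "n8n: a field report"
```

The point of the sketch: the retry loop lives in the application, and the model call is just one replaceable step inside it.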
Basically, I’m right at the heart of the difficulties of AI-driven digital transformation. Introducing AI is, in my view, an additional step in digital transformation—it’s almost a tautology to state it like that.
In its simplest form, it’s ultimately just the automation of white-collar work and all the jobs I’d call cognitive (consultants, managers, lawyers, developers, journalists, product owners, doctors, etc.).
This brings up the real subject of this post: AI and automation. Because if you move to AI, you pretty quickly move to automating first the secondary tasks, and then why not the main ones?
Always interesting to name things because suddenly you become aware of them.
AI and Automation in Historical and Economic Perspective
A short philosophical digression on the confused perception of what is or isn’t actually happening with AI.
The constant reinvention of capitalism, coupled with technology, questions the more or less established fundamentals of society: work, sure, but beyond that our relationship to culture, learning, and politics; in short, our presence in and engagement with the world. No wonder the “geniuses” of tech, the “feudals,” the “tech lords” want to put an efficient system in place instead of democracy and universal suffrage: cold rationality instead of uncertain individuals, stuck in their hesitations and contradictions in all aspects of human life.
I’ll let you discover the sectors that are getting or will get crushed by the algorithmic economy—a real massacre, necessary for some, unthinkable for a handful of diehards, inevitable for the majority of us who just endure it.
Historically, we’re witnessing an ultimate dismantling attempt that would plunge every sector into a neo-liberal paradise or hell, toward a supposed atomicity of supply and demand. Except this atomicity of supply is a decoy, because it’s thwarted by the natural monopolistic mechanism of platformization. Note that it’s all spelled out in the “platforms” slug of Palantir’s URLs. Peter Thiel: the puppet master, demiurge of JD Vance, follower of René Girard’s mimetic theory and of Ayn Rand.
You only lend to the rich, as they say! So to the ultra-rich, it’s almost indecent.
There’s a lot of talk about The Tech Oligarchs, The Technofeudalism. Ultimately, Thiel is probably a Republican political entrepreneur just as Soros is for the Democratic camp! Both have intellectual ambitions and colossal means.
A quick word to come back down to earth: Marlene Engelhorn, granddaughter of the creator of the chemical company BASF, 30-year-old Austrian heiress:
“The ultra-rich think they’re more competent than the rest of the population, simply because they can pay to see their ideas applied, in politics as in economics.”
The ultra-rich believe they’re at the center of society when they’re actually on the margins.
Personally, I prefer Gilbert Thiel, a namesake.
Let me explain this platformization mechanism. Imagine you’re looking for work—either to change jobs or just looking for one.
Today, is it even conceivable not to have as a candidate: a LinkedIn profile, a mobile phone, a social media presence verified by active accounts? It will be the same for AI. I’ll crudely summarize my thinking: “If you don’t have an AI agent, you’ll soon be nothing.”
Or as I recently heard from an actor who told me: “You better deal with AI before AI deals with you.”
From a political perspective: with AI, we’re swimming in nudge theory, where “indirect suggestions can, without forcing, influence motivations and encourage decision-making by groups and individuals, at least as effectively as direct instruction, legislation, or enforcement.”
It’s not mandatory to have all the “minimum” attributes of a candidate, but not having them puts you on the margins or literally excludes you from the job market.
Take any other situation—it’s the reality of the market that shapes you as an individual, not the other way around.
I also have a more mundane explanation: I’m getting “old” in the digital world and I’m only just discovering that I’m being left behind… well, not completely.
PS: With AI, I’ve also personally realized that I’m gradually degrading my language skills and cognitive abilities. If AI is choosing the words, it’s choosing your expression—since AI takes control and writes in your place. AI dictates what you write, therefore what you think, no? So I’m trying to learn to type professionally like a typist, otherwise I’ll probably no longer even be able to write, and my job will consist of waiting patiently and stupidly for AI to finish typing my ticket, my summary, my email, or my bit of code.
Some Feedback on Using AI for My Day-to-Day
I’ve said it several times on this blog: “I ain’t a specialist, I am a user.” So, just for the record, because it’s funny to think that in six months, a year no doubt, my practice will have radically changed, here are the trends:
In my practice, I’m clearly seeing the collapse of search engine usage. Like everyone else, I don’t go to Google.com anymore, occasionally to duckduckgo.com. I use the “ask AI” option whenever possible on mlflow.org, GitHub, Le Monde with Perplexity… major shift in usage indeed. Search engines are dead, long live the LLMs.
I use Claude, Mistral, ChatGPT, Deepseek, Perplexity, and a bunch of open-source LLMs on a daily basis—for writing, coding, debugging, researching. They’ve become as mundane as opening a browser tab. And here’s the thing: thinking my opinion on which LLM is “better” has any real value is pretty ludicrous. My usage is anecdotal, my needs are specific, and the landscape shifts every few weeks. What works for me today might be obsolete tomorrow, or just completely irrelevant to your use case.
It’s time to really move on to exploring practical cases with n8n and LLMs, without forgetting the MCP concept. Show me some code, dude.
Because ultimately, it’s all well and good to theorize, compare models, pontificate about digital transformation—but if you don’t get your hands dirty, if you don’t code, if you don’t actually automate something concrete, you remain in abstraction. Practice is what anchors understanding. It’s by tinkering with n8n, chaining API calls, testing shitty prompts that you really understand what works or doesn’t. Less talk, more POCs. Fewer slides, more terminal.
Example #1: Transforming Claude into an n8n Expert with n8n-MCP
A very interesting and smart combination with Claude, n8n and MCP. It strongly resonates with my actual concerns and interests and leverages what I learned from MCP and Claude in one of my previous posts: Maximizing Claude AI: Desktop App, MCP, Agents & Presentation Generation Guide.
A Model Context Protocol (MCP) server that provides AI assistants with comprehensive access to n8n node documentation, properties, and operations. Deploy in minutes to give Claude and other AI assistants deep knowledge about n8n’s 525+ workflow automation nodes.
To follow the YouTube tutorial on n8n-MCP, you will need to activate a few API keys.
```
# put your own credential for Claude
# 001-n8n-key-for-claude
sk-ant-api09-xxx

# put your own credential for Google Sheets API
# MY_full_api_key_1 (Google Sheets API)
UYkjkYTvvSFSBO-xxx

# Client ID
343820906095-xxx

# Client secret
GOBQPT-xxx
```
```
Task: Learn Brazilian Portuguese
Status: TODO
Description: I intend to learn Brazilian Portuguese and then go to Brazil in the coming year 2026
Deadline: 2025-09-24
```
If you are encountering difficulties connecting Google Sheets to a Self-Hosted Version of n8n, here’s a short and good video. You will need to go to https://cloud.google.com/cloud-console
https://www.youtube.com/watch?v=mH1Hgn-JnZw
Some other resources that could be useful:
- https://n8n.io/integrations/claude/and/google-sheets/
- Connecting AI Agents to n8n with the Model Context Protocol (MCP)
- Social Media Intelligence Workflow with Bright Data and OpenAI
```
# PROMPT_1 in Claude to create an n8n workflow with or without n8n-MCP
Please create an n8n workflow that checks Hacker News every hour and fetches
all the articles published today, then connects to a Google Sheet called
"hacker_news" to fetch the articles from today that we already have, compares
the two sources and saves only the new ones.
```
```
# install n8n-MCP
# Prerequisites: Node.js installed on your system

# Run directly with npx (no installation needed!)
npx n8n-mcp
```
Example #2: AI-Powered Restaurant Review Analyzer
A great tutorial for this use case: https://github.com/tsehowang2/AI-Powered-Restaurant-Review-Analyzer/tree/main
An n8n workflow that automatically analyzes restaurant reviews in multiple languages and provides detailed scoring across 5 key points: Food, Service, Environment, Value, Overall. Built with Ollama LLM models for accurate sentiment analysis and translation capabilities.
```
# Single Comment Analysis
The pasta was amazing but service was slow

# Multi-Comment Analysis
Great food but expensive
個侍應態度好差 (Cantonese: "that waiter's attitude was terrible")
Terrible experience, food poisoning
Beautiful view, worth the price
```
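For illustration, here is a hedged Python sketch of what the analyzer's core step might look like: building a multi-criteria scoring prompt and validating the model's JSON reply. The function names and the prompt wording are mine, not taken from the original workflow.

```python
import json

# the five scoring points named in the project's README
CRITERIA = ["Food", "Service", "Environment", "Value", "Overall"]

def build_prompt(review: str) -> str:
    """Ask the LLM for one 1-5 score per criterion, as JSON only."""
    keys = ", ".join(f'"{c}"' for c in CRITERIA)
    return (
        "Score this restaurant review from 1 to 5 on each criterion.\n"
        f"Reply with JSON only, keys: {keys}.\n"
        f"Review (any language): {review}"
    )

def parse_scores(reply: str) -> dict:
    """Validate the model's JSON reply against the expected criteria."""
    scores = json.loads(reply)
    return {c: int(scores[c]) for c in CRITERIA}

# e.g. feed parse_scores the text returned by an Ollama chat call
example_reply = '{"Food": 4, "Service": 2, "Environment": 3, "Value": 3, "Overall": 3}'
scores = parse_scores(example_reply)
```

Forcing a JSON-only reply and validating it is what makes the output usable downstream in an n8n workflow, whatever the input language.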
Example #3: A Chat with Ollama in n8n
A great tutorial for a simple chat: https://www.hostinger.com/tutorials/n8n-ollama-integration
```
# a query
Who is the owner of palantir.com?
```
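Outside n8n, the same chat can be reproduced against Ollama's local REST API (default port 11434). A minimal Python sketch, assuming a running `ollama serve` and a pulled model; the model name here is just an example:

```python
import json
import urllib.request

def build_chat_payload(question: str, model: str = "gemma3:4b") -> dict:
    # stream=False asks Ollama for one complete JSON reply instead of chunks
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }

def ask_ollama(question: str, model: str = "gemma3:4b") -> str:
    """POST to the local Ollama /api/chat endpoint and return the answer text."""
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(build_chat_payload(question, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

if __name__ == "__main__":
    # requires `ollama serve` running and the model already pulled
    print(ask_ollama("Who is the owner of palantir.com?"))
```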
Installing & Using n8n with Ollama
In order to use n8n, you can check the official website, where there are plenty of integrations, including one with Ollama. You can browse the sites below. I have also gathered all the Terminal commands used.
- https://n8n.io/
- https://n8n.io/integrations/
- https://n8n.io/integrations/ollama-model/
- https://ollama.com/
- https://github.com/ollama
Commands for Ollama
A quick survival guide for Ollama commands.
```
# check that ollama is running: http://localhost:11434/

# launch the server
ollama serve

# like other ps commands that list processes, ollama ps lists running models
ollama ps

# run models
ollama run phi3.5:3.8b
ollama run neoali/gemma3-8k:4b
ollama run gemma3:4b      # gemma3:latest
ollama run deepseek-r1
ollama run embeddinggemma
ollama run gemma3:1b
ollama run gemma3n
ollama run phi3:14b

# remove models
ollama rm phi3.5:3.8b
ollama rm neoali/gemma3-8k:4b
ollama rm gemma3:4b
ollama rm deepseek-r1
ollama rm mistral-small:22b
ollama rm embeddinggemma
ollama rm gemma3:1b
ollama rm gemma3n
ollama rm phi3:14b
```
Testing LLM locally with Ollama
I have selected several models, but I have focused my interest on these two:
- gemma3n:latest (7.5GB): Gemma 3n models are designed for efficient execution on everyday devices such as laptops, tablets or phones.
- life4living/ChatGPT:latest (2.0GB): Use OSS 20B ChatGPT Open Source Model
```
# If version 1.111.1 of n8n is required, just run
npm install n8n@1.111.1

# to install node
brew update
brew upgrade node

# get some info
brew info node

# check the version
node --version

# find the node used on your machine
which node

# create the shortcut for the console
export PATH="/opt/homebrew/bin:/usr/local/bin:$PATH"

# or edit manually at your own risk
code ~/.zshrc
source ~/.zshrc

# change the node path if needed
# /Users/[username]/.nvm/versions/node/v20.18.0/bin/node
```
```
# requirements: Terminal is used for all commands

# go to the path where you will install n8n
cd /Users/brunoflaven/Documents/01_work/blog_articles/_ia_using_n8n_io

# check your computer before installing n8n
node --version
npm --version

# install n8n globally
npm install -g n8n

# install n8n locally (dev)
npm install n8n

# update and fix n8n
npm update n8n
npm audit fix

# launch n8n
npx n8n

# create a "simple" test account as admin if you use n8n locally, for instance:
# Email: test@test.com
# Password: Test*12345
```
Fix a common issue for n8n with Ollama
```
# If you want ollama to listen on all network interfaces on your machine,
# set the following environment variable:
export OLLAMA_HOST="0.0.0.0"

# Then just start ollama with:
ollama serve

# There is a nice FAQ on how to run ollama as a server instance:
# https://github.com/ollama/ollama/blob/main/docs/faq.md

# Also, if your ollama server is behind a firewall (NAT, etc.) you will need
# to forward the port from your firewall/router to your machine.
```
Edit config from n8n
```
# change the rights
chmod 600 /Users/[username]/.n8n/config

# edit using VS Code
code ~/.n8n/config
```
```
# Access to n8n
# Editor is now accessible via: http://localhost:5678
# Press "o" to open in Browser.
```
```
# to connect Ollama to n8n you will probably use these URLs locally
http://localhost:11434/api/generate
http://localhost:11434/api/chat
```
Example #4: Querying Ollama
A very basic idea: make it easier to manage and run generative AI (IAG) tests on a dataset.
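Since this example is just an idea, here is a minimal sketch of such a test harness in Python, assuming a local Ollama server; the mini dataset and the model name are made up for illustration:

```python
import json
import urllib.request

def generate(prompt: str, model: str = "gemma3:1b") -> str:
    """Call a local Ollama server's /api/generate endpoint, non-streaming."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def run_dataset(prompts, model="gemma3:1b", caller=generate):
    """Run every prompt through the model and collect (prompt, answer) pairs."""
    return [(p, caller(p, model)) for p in prompts]

if __name__ == "__main__":
    # made-up mini dataset; requires `ollama serve` and the model pulled
    dataset = [
        "Summarize in one line: n8n is a workflow automation tool.",
        "Translate to French: good morning",
    ]
    for prompt, answer in run_dataset(dataset):
        print(prompt, "->", answer.strip()[:80])
```

The `caller` parameter makes the loop testable and lets you swap in another model backend without touching the harness.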
Some Other Stuff
A small number of nuggets discovered in recent weeks not directly related to this post but important to keep.
1. Using string.com
For the laziest among us, you can try string.com to write your own AI agent to automate as many of your tasks as possible.
What do you want to automate? Prompt, run, edit, and deploy AI agents in seconds
https://string.com/
Prompt Example from string.com
Build an agent that gets a random Pokemon from the PokeAPI, uses AI to generate a 5–7 sentence story about it, and posts it to Slack each morning at 9am.
2. Generative Engine Optimization – GEO (Hostinger)
Hostinger is a web host that offers everything from traditional hosting for a WordPress blog to more sophisticated options like self-hosting n8n. I’ve had the opportunity to test this web host, and its blog is also an excellent source of information. For those of you wondering how to do “Generative Engine Optimization,” here are some answers:
How do I create an llms.txt file? This file, similar to robots.txt, apparently allows AI bots to explore, understand, and interact with your site’s content in order to appear in LLM answers.
- llms.txt Validator
  https://llmstxtvalidator.org/
- The /llms.txt file
  https://llmstxt.org/
A mock llms.txt example:

```
# Title

> Optional description goes here

Optional details go here

## Section name

- [Link title](https://link_url): Optional link details

## Optional

- [Link title](https://link_url)
```
Source: https://llmstxt.org/
A more sophisticated example at https://www.fastht.ml/docs/llms.txt
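If you prefer generating the file over writing it by hand, a small helper might look like the Python below; the page list here is a made-up example, not taken from a real site:

```python
def make_llms_txt(title: str, description: str, sections: dict) -> str:
    """Build llms.txt text: H1 title, blockquote summary, one H2 per section."""
    lines = [f"# {title}", "", f"> {description}", ""]
    for section, links in sections.items():
        lines.append(f"## {section}")
        for link_title, url, note in links:
            suffix = f": {note}" if note else ""
            lines.append(f"- [{link_title}]({url}){suffix}")
        lines.append("")
    return "\n".join(lines)

text = make_llms_txt(
    "My Blog",
    "Posts about n8n, Ollama and AI automation.",
    {"Docs": [("Install guide", "https://example.com/install", "setup steps")]},
)
# write it at the site root, next to robots.txt:
# open("llms.txt", "w").write(text)
```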
Web2Agent in Hostinger
Once again, to keep pace with the AI revolution, Hostinger has developed a feature that makes it easier for AI to ingest a site hosted on Hostinger; it works very well with Claude via the MCP protocol, for example. Well, well, how curious 🙂
The tagline is unambiguous:
Web2Agent is an experimental feature developed and operated by Hostinger. It transforms your website into a fully AI-compatible agent that can be easily discovered, understood, and accessed by AI tools. It currently works best with Claude, Cursor and tools supporting MCP protocol and we’re working on integrating it with ChatGPT, Gemini, and other autonomous AI agents.
Source: https://www.hostinger.com/support/11729400-web-2-agent-in-hostinger/
3. A New Browser Under AI Steroids: Comet from Perplexity
So, the “browser that works for you”. Perplexity has clearly decided to take on search engines like Google and its Chrome browser. They’ve just launched Comet, a browser backed by Perplexity’s AI that serves as your assistant. Comet takes LLM integration on desktop and mobile a step further; it is more ambitious than the Claude Desktop app. I haven’t tried it yet, but we’ll see.
Comet uses Perplexity’s search engine, optimized for fast and accurate answers. Perplexity search gives you the choice to navigate the web and check sources for original facts.
Source: https://www.perplexity.ai/comet
More info
- Using Ollama and N8N for AI Automation – YouTube
  https://www.youtube.com/watch?v=VDuA5xbkEjo
- Best apps & software integrations | n8n
  https://n8n.io/integrations/
- n8n self-hosted AI starter kit – demo workflow (JSON)
  https://raw.githubusercontent.com/n8n-io/self-hosted-ai-starter-kit/refs/heads/main/n8n/demo-data/workflows/srOnR8PAY3u4RSwb.json
- GitHub – voratham/poc-n8n-with-simple-gen-content-workflow: a test of a simple n8n workflow with Docker + Ollama + the llama3.2:3B model
  https://github.com/voratham/poc-n8n-with-simple-gen-content-workflow/tree/main
- GitHub – akshaymehare00/seo_optimization_workflow: AI-powered SEO optimization workflow for n8n that automatically generates optimized titles and meta descriptions from database content using OpenAI, Hugging Face, or Ollama models
  https://github.com/akshaymehare00/seo_optimization_workflow
- AI-Powered-Restaurant-Review-Analyzer/workflow.json at main – tsehowang2/AI-Powered-Restaurant-Review-Analyzer – GitHub
  https://github.com/tsehowang2/AI-Powered-Restaurant-Review-Analyzer/blob/main/workflow.json
- n8n Evaluation quickstart – YouTube
  https://www.youtube.com/watch?v=5LlF196PKaE
- How To Install n8n Locally On Windows, macOS, And Linux
  https://mehulgohil.com/blog/install-n8n-locally/
- Descubre 8n8, tu asistente de flujos inteligentes (“Discover n8n, your smart workflow assistant”, in Spanish) – YouTube
  https://www.youtube.com/watch?v=6QftDPX3qKE
- n8n RAG Masterclass – Build AI Agents + Systems that Actually Work – YouTube
  https://www.youtube.com/watch?v=75lwkzFxyLs
- This n8n mcp is INSANE… Let AI Create your Entire Automation – YouTube
  https://www.youtube.com/watch?v=xf2i6Acs1mI
- I Built A Fully Local AI Agent with GPT-OSS, Ollama & n8n (GPT-4 performance for $0) – YouTube
  https://www.youtube.com/watch?v=mnV-lXxaFhk
- AI Engineering: Building Applications With Foundation Models by Chip Huyen (PDF) – PratyushDS/AI-Books – GitHub
  https://github.com/PratyushDS/AI-Books/blob/main/AI%20Engineering_%20Building%20Applications%20With%20Foundation%20Models%20by%20Chip%20Huyen%20(1).pdf
- GitHub – chiphuyen/aie-book: [WIP] resources for AI engineers; also contains supporting materials for the book AI Engineering (Chip Huyen, 2025)
  https://github.com/chiphuyen/aie-book/tree/main
- STOP Taking Random AI Courses – Read These Books Instead – YouTube
  https://www.youtube.com/watch?v=eE6yvtKLwvk&t=238s
- Palantir Foundry
  https://www.palantir.com/platforms/foundry/
- Palantir Foundry Is 5–10 Years Ahead of Every Other Data Platform – Sainath, Data Engineer Things
  https://blog.dataengineerthings.org/what-palantir-foundry-taught-me-about-building-better-data-systems-407e3768d5fc
- JD Vance – Wikipedia
  https://en.wikipedia.org/wiki/JD_Vance
- René Girard – Wikipedia
  https://en.wikipedia.org/wiki/Ren%C3%A9_Girard
- Mimetic theory – Wikipedia
  https://en.wikipedia.org/wiki/Mimetic_theory
- Ayn Rand – Wikipédia (in French)
  https://fr.wikipedia.org/wiki/Ayn_Rand
- Slavoj Žižek – Clarín.com
  https://www.clarin.com/autor/slavoj-zizek.html
- JD Vance como censor supremo: batallas y acuerdos entre la nueva Derecha y la Izquierda Woke, según Slavoj Žižek (“JD Vance as supreme censor: battles and agreements between the new Right and the Woke Left, according to Slavoj Žižek”) – Clarín
  https://www.clarin.com/revista-n/jd-vance-censor-supremo-batallas-acuerdos-nueva-derecha-izquierda-woke-slavoj-zizek_0_fnZs4gYFMd.html
Tags: automation for white-collar jobs (n8n), books on AI, Foundry, Palantir, Thiel, JD Vance, Slavoj Žižek