The ‘Green’ AI Myth: Carbon Costs of ML Models & Insights on GEO and DeepSeek Fine-tuning

I’ve been deeply immersed in AI’s role in our daily lives, exploring several interconnected themes. My recent focus has been on Generative Engine Optimization (GEO) and the environmental impact of AI systems, particularly their carbon footprint. I’ve been researching methods to evaluate and select AI tools based on their environmental performance metrics, with a special emphasis on developing KPIs for measuring AI-related carbon emissions. Additionally, I’ve been investigating how to analyze the broader corporate social responsibility (CSR) implications of AI implementation, while also exploring DeepSeek’s fine-tuning capabilities for custom AI model development using Python.

As with previous posts, you can find all files and prompts on my GitHub account. See https://github.com/bflaven/ia_usages/tree/main/ia_kpi_carbon_footprint_deepseek

The audio extracted from this article, made with NotebookLM

The Big Dive: The ‘Green’ AI Myth: Carbon Costs of ML Models & Insights on GEO and DeepSeek Fine-tuning

1. Generative Engine Optimization (GEO)

We used to talk about SEO back when search engines dominated the world (it already sounds like Jurassic Park); we will now talk about Generative Engine Optimization (GEO). Indeed, the appearance of conversational chatbots changes the paradigm of what is commonly called “web search” in two main ways.

  1. First, the search action itself has changed: you no longer enter a few words, possibly with Boolean operators for the most experienced among you; you enter a real query in natural language, however imperfect it may be.
  2. Second, you are no longer querying a “catalogue” of web resources whose returned elements you then have to sort and assess for relevance, but an intelligence resulting from the “compression” of that same catalogue. In short, this strangely reminds me of a dunce who, before a knowledge test, questions the top of the class, obviously bespectacled, who has read and indexed everything for you.

This is not only a change in usage; from a business model point of view, it is also a full-scale attack on the link economy (SEO) and on what made Google’s fortune, the sponsored link. Without realizing it, this irruption of AI and chatbots is a “creative destruction” in the Schumpeterian sense of the term.

If you have had the opportunity to use Perplexity or Copilot, you must have noticed the links that appear at the bottom of the response. For the moment, these are supposedly relevant links from the corpus that was used to train the LLM. Yeah, yeah… I’m not sure that these links appearing at the bottom of the response to your prompt will remain solely the disinterested fruit of knowledge; they will rather become the fruit of targeted purchases by companies that want to promote their content. In any case, that’s what I would do: reinvent Google Ads, but inside Perplexity. It seems to me that the foundations are indeed under threat!

Finally, if we apply this implacable logic to social networks, it is less obvious, but there are signs that do not deceive. Why else would Meta now provide LLMs, if not because Zuckerberg’s company has siphoned off all its users’ content to “compress” that intelligence into an LLM?

It is also the announced end of social networks, no, let’s say rather their reinvention!

The goal of social networks is maximum retention, so Facebook’s wish is to make you a personalized chatbot that is all about you: a kind of personalized RAG that has ingested everything posted by yourself and your close circle, therefore an LLM with an absolute cognitive bias, where the other does not exist, having been definitively expelled.

The tasty paradox is that you are alone in the middle of everyone, as with your phone. Nothing and no one will come to contradict you anymore, yet asserting your singularity can only be done through otherness.

The extreme solipsism proposed by AI is doomed to failure. The exteriority and resistance of reality are a given of my experience itself, but AI says yes to almost everything, all the time! Every human being knows, without wanting to know, that he is the only one to exist, and yet others exist like him. Each consciousness only lives its own experience.

2. AI, the Ultimate Carbon Footprint

Twenty years ago, the U.S. warned of Iraq’s alleged ‘weapons of mass destruction’, but the lie now taking shape around AI is beyond comparison in its scale and intensity. How can we live with such double talk, lying by omission or by intention?

Being reasonably pessimistic, how will the emerging AI market not destroy nature, jobs and, finally, people themselves?

I had mentioned in a previous post how AI was an “Energy Hog”.
Well, despite being aware of AI’s major ecological impact, I am caught like everyone else, both professionally and personally, in the race for AI, where ecological considerations take second place.

I can’t find anything better than generating scripts, texts and images with AI. In short, I am in the middle of this paradox of modernity: the obligation to accompany, or even accentuate at my level, the carbon footprint, in complete contradiction with some of my more ethical convictions. The same goes for the social impact as for the ecological one: I also know that the introduction of AI will no doubt destroy jobs without creating as many as expected.

At a recent conference on AI that I attended as a listener, in addition to meeting old acquaintances with pleasure, I observed the Franco-French microcosm of journalism and the media.

A difficult choice between two subjects was discussed: the fight against disinformation (debunking, fake news, post-truth) and concern for the planet (CSR, the survival of the planet, the melting of glaciers, the announced destruction of the planet).

A dilemma that could be summed up in its modern version in hashtag mode by the opposition #SaveThePlanet vs #SaveMyJob.

This reminds me of a phrase from the correspondence of the poet Arthur Rimbaud, which describes very well the inevitability of the bad choice.

“The most likely thing is that we go where we don’t want to go, and that we do what we don’t want to do, and that we live and die quite differently from what we would ever want, without any hope of any kind of compensation.”

Arthur Rimbaud. Correspondence from Aden, January 15, 1885.

“Le plus probable, c’est qu’on va plutôt où l’on ne veut pas, et que l’on fait plutôt ce qu’on ne voudrait pas faire, et qu’on vit et décède tout autrement qu’on ne le voudrait jamais, sans espoir d’aucune espèce de compensation.”

Arthur Rimbaud. Correspondance d’Aden, 15 janvier 1885.

No doubt to redeem myself morally, like a true hypocrite, in the arbitration that I must make for the company that employs me on AI choices, I have guiltily revalued the CSR and social impact criteria in the AI decision matrix, giving them a weighting as important as, or even stronger than, the other more objective decision-making criteria. It is the price of my conscience, in a way, to counter the economic logic that prevails despite everything. To be totally transparent, the decision-making criteria that I use to challenge AI are symbolized by the following keywords: data security, data confidentiality, sovereignty over models, cost control, editorial validation, multilingualism, fine-tuning of models.
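To make this arbitration concrete, here is a minimal sketch of such a weighted decision matrix in Python. The criteria come from the list above; the weights and the two example tools, with their scores, are purely illustrative assumptions.

```python
# Hypothetical weighted decision matrix for comparing AI tools.
# Criteria are from the post; weights and scores are illustrative only.

CRITERIA_WEIGHTS = {
    "data_security": 1.0,
    "data_confidentiality": 1.0,
    "model_sovereignty": 1.0,
    "cost_control": 1.0,
    "editorial_validation": 1.0,
    "multilingualism": 1.0,
    "fine_tuning": 1.0,
    "csr_impact": 1.5,  # deliberately over-weighted, as described in the post
}

def score_tool(scores: dict) -> float:
    """Weighted sum of per-criterion scores (each scored 0-5)."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0) for c in CRITERIA_WEIGHTS)

# Two hypothetical tools: one strong on CSR, one strong everywhere else.
tool_a = {c: 3 for c in CRITERIA_WEIGHTS} | {"csr_impact": 5}
tool_b = {c: 4 for c in CRITERIA_WEIGHTS} | {"csr_impact": 1}

print(score_tool(tool_a), score_tool(tool_b))
```

Adjusting the `csr_impact` weight is exactly the lever described above: push it high enough and the frugal tool overtakes the merely performant one.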

To signify energy consumption and CO2 emissions, I wondered how to represent, for example, a breakdown of AI functionality services in the manner of a house’s energy class table: the famous “energy consumption” or “energy performance” label.

# FRENCH
CLASSE;DÉFINITION
A;économe
B;basse consommation
C;haute performance
D;moyenne basse
E;moyenne haute
F;énergivore
G;très énergivore

# ENGLISH
CLASS;DEFINITION
A;economical
B;low consumption
C;high performance
D;medium low
E;medium high
F;energy-intensive
G;very energy-intensive
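As a sketch of how such a label could be computed for an AI service, here is a minimal Python mapping from an estimated energy figure to the A–G classes above. The kWh thresholds are invented for the example and correspond to no standard.

```python
# Illustrative mapping from an estimated energy figure to the A-G scale above.
# The kWh-per-1,000-requests thresholds are assumptions, not a standard.

THRESHOLDS = [  # (upper bound in kWh per 1,000 requests, class)
    (0.1, "A"), (0.5, "B"), (1.0, "C"),
    (2.0, "D"), (5.0, "E"), (10.0, "F"),
]

def energy_class(kwh_per_1k_requests: float) -> str:
    for bound, label in THRESHOLDS:
        if kwh_per_1k_requests <= bound:
            return label
    return "G"  # very energy-intensive

print(energy_class(0.3))   # a frugal service
print(energy_class(42.0))  # an LLM-style "energy hog"
```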

You can check the prompts and the IA-generated images. See https://github.com/bflaven/ia_usages/tree/main/ia_kpi_carbon_footprint_deepseek/ia_carbon_footprint_visualization

And as a good hypocrite, I used AI to refine the reasoning and generate images to illustrate this ecological thinking.

I am like a fervent feminist I know, who defends radical feminist positions in public yet behaves like a domestic tyrant and macho at home with his wife and children. Such is the hard human condition: how to align one’s actions with one’s values?

Currently, the world is full of these hypocrites: “tartuffes”, “judases”, call them what you want.

This also raises questions about human psychology: why do people choose “idols” that are completely opposed to their values? I could cite a few examples of these contradictions, but that would risk being too controversial, and I hate conflict…

Does this mean that the “idol” is there to be the vehicle of my bad conscience or to redeem my behavior? In “The Twilight of the Idols” Nietzsche says this sentence: “Even the bravest among us rarely has the courage to do what he really knows…”

But let’s get back to this crucial ecological problem posed by AI, and to this obedient way that we all have of responding to the injunction of novelty (to do AI) while the house is burning (the ecological emergency), to use the words of the late J. Chirac.

I had the chance to attend a conference by Tristan Nitot, here are the main elements that I understood and that were presented. The topic was the major ecological challenge represented by the exponential use of AI in all human activities and consequently the ecological danger that this represents.

We are indeed reaching the planetary boundaries more and more quickly.

Source: https://www.stockholmresilience.org/research/planetary-boundaries.html

The observation is clear: human activity accelerates the degradation of our planet, as shown by the collapse of biodiversity. Humans are constantly eroding biodiversity, and global warming makes disasters more powerful and more frequent.

We are now talking about the “sixth extinction”.
This mass extinction has already begun, and it is the title of a book (Pulitzer Prize 2015) by Elizabeth Kolbert, an American journalist who conducted an investigation into the environmental crisis and its impact on life on Earth. To do this, she goes back to the origins of life, shares knowledge on the great extinctions (most famously, that of the dinosaurs) and meets scientists for a current inventory of endangered species.

A quote from Upton Sinclair, cited by Tristan Nitot during the conference, was enlightening. It explains our blindness to the ecological risks linked to AI.

It is difficult to get a man to understand something, when his salary depends on his not understanding it.

Upton Sinclair, I, Candidate for Governor: And How I Got Licked

As I already mentioned, AI is an “Energy Hog”: through its energy consumption, it is bringing us into a new era.

Tristan insisted on the fear of being overtaken that pushes us to use AI: the #SaveThePlanet vs #SaveMyJob dilemma again.

Indeed, energy needs are enormous following the dual revolution of mobility and AI.

What solutions are there to counter this AI phenomenon?
Rightly so, the question arises in the face of this dilemma: what can or should we do? Tristan proposes two simple ones.

  1. Solution #1: Ban AI. LOL 🙂
  2. Solution #2: Raise awareness about the rational use of AI. In short, do not use an LLM to solve every problem, especially trivial problems like writing an email with a general-purpose LLM like ChatGPT, which, in terms of carbon impact, is the equivalent of going to get your baguette in a private jet (a very French situation). Tristan recommends using simple algorithms that are less energy- and storage-intensive for many problems, rather than bringing out the heavy artillery of Mistral, Claude, ChatGPT and other Perplexitys at every turn.

There were sometimes misunderstandings between the keynote speaker and the listeners, journalists for the most part, about the level of understanding. For example, this gave rise to a surreal exchange on the use of the “random forest classifier” or the “random forest regressor”. The question asked by the listeners was “Where can we find your random forest classifier tool?”, as if it were off-the-shelf software, and Tristan had to explain, a little embarrassed, that it is above all a statistical method!
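For readers wondering what was behind that exchange: a random forest is indeed a method, not a product, typically used through a library such as scikit-learn. A minimal sketch, assuming scikit-learn is installed, trained on the classic iris toy dataset:

```python
# A random forest is a statistical/ML method, not an off-the-shelf tool:
# here it is used through scikit-learn (assumed installed) on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

A few dozen lines and a CPU, no GPU cluster: which is exactly Tristan’s point about oversized tools.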

We must not neglect the fact that the era is one of consumerism and radical utilitarianism. Which means that, to caricature, we are all and always looking for immediate turnkey solutions, whether for this famous “random forest classifier” or any other AI solution. We do not want to think or work; we want everything, right away and now, without effort.

To put it prosaically, the LLM tool is often oversized compared to most needs and the use of AI, especially LLMs, for useless purposes is legion. Me first.

Tristan mentioned the fierce competition between players in the emerging AI market. We know that the virtues of competition are to lower prices and offer value for money to consumers, but above all to kill the competition by reducing prices to such a level that competitors, selling at a loss, can no longer keep up. Given the investments that AI requires, the more richly endowed you are, the more likely you are to emerge victorious from this “natural selection”. This is the famous “winner takes all” dynamic, which moves an emerging, healthily competitive market towards a monopoly or oligopoly, where price agreements and collusion are rife, for the greater benefit of investors and not of the consumers that we are.

The phrase “The Winner Takes It All”, besides being a pop song by the group ABBA, sums up the current situation on the AI market quite well. I could also quote Highlander: “There can be only one.”

In summary, “Who among DeepSeek, Mistral, ChatGPT, Grok, Claude… will win, i.e. dominate the market?”, since in the end it is unfortunately only a question of big money!

We can already see the market being structured: the price supposedly reflecting the equilibrium point has been, for a good year, $20 per month for the high-end product and $10 for the mainstream one, with always a free tier to hook the customer in order to milk him better afterwards.

AI Services Comparison Table

| Name | Price | Model |
|---|---|---|
| ChatGPT Plus | $20.00 | GPT-4o (Aug ’24) |
| ChatGPT Free | Free | GPT-4o (Aug ’24) |
| Claude Pro | $20.00 | Claude 3.5 Sonnet (June) |
| Claude Free | Free | Claude 3.5 Sonnet (June) |
| Gemini Advanced | $19.99 | Gemini 1.5 Pro (May) |
| Gemini Free | Free | Gemini 1.5 Flash (May) |
| Poe Pro | $19.99 | GPT-4o (Aug ’24) |
| Poe Free | Free | GPT-4o (Aug ’24) |
| Perplexity Pro | $20.00 | GPT-4o (Aug ’24) |
| Perplexity Free | Free | Sonar 3.1 Small |
| Microsoft Copilot Free | Free | GPT-4o (Aug ’24) |
| Meta AI | Free | Llama 3.1 405B |
| Grok | $8.00 | Grok-2 |
| Mistral Le Chat | Free | Mistral Large 2 (Jul ’24) |
| HuggingChat | Free | Llama 3.1 70B |
| Character AI Plus | $9.99 | Character AI |
| Character AI Free | Free | Character AI |
| ChatGPT Free (Logged Out) | Free | GPT-4o mini |

Source: https://artificialanalysis.ai/insights/chatbots-comparison

This equilibrium price is not the result of chance or the natural result of market forces. No, it is the result of a subtle marketing strategy known in economics as well as in human psychology.

The prevalence of subscriptions priced around $10 or $20 is the result of carefully crafted psychological pricing strategies. This price range is considered an acceptable “pain point” for most consumers—low enough not to trigger deep reflection on the service’s value, yet high enough to generate significant revenue for companies.

  1. Insidious accumulation: Consumers often underestimate the total cost of their multiple monthly subscriptions.
  2. Anchoring effect: Low initial prices create a psychological reference point, making future increases more acceptable.
  3. Status quo bias: The perceived difficulty of changing services encourages consumers to remain subscribed, even when value decreases.

One last point concerns another well-known phenomenon: the vigilance required with regard to the rebound effect, or Jevons paradox. Indeed, the more economical AI becomes in energy resources, the more we will put it everywhere.

The rapid development of AI technology, particularly Large Language Models (LLMs), has made AI tools increasingly efficient, affordable, and accessible. Following the Jevons paradox pattern, this increased efficiency hasn’t reduced resource consumption – instead, it’s dramatically increasing the total computational resources and energy used in AI development and deployment.
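A back-of-the-envelope illustration of this rebound effect, with entirely invented numbers:

```python
# Toy illustration of the Jevons paradox: per-query efficiency doubles,
# but usage more than doubles, so total consumption still rises.
# All figures are invented for the sake of the example.

energy_per_query_wh = 3.0      # before optimization
queries_per_day = 1_000_000

baseline_kwh = energy_per_query_wh * queries_per_day / 1000  # 3,000 kWh/day

# After optimization: each query costs half as much...
optimized_energy_wh = energy_per_query_wh / 2
# ...but the cheaper, more accessible service attracts 4x the traffic.
new_queries_per_day = queries_per_day * 4

new_kwh = optimized_energy_wh * new_queries_per_day / 1000   # 6,000 kWh/day

print(baseline_kwh, new_kwh)  # consumption doubles despite the efficiency gain
```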

QED: quod erat demonstrandum!

For this post too, you can find all files and prompts on my GitHub account. Source: https://github.com/bflaven/ia_usages/tree/main/ia_kpi_carbon_footprint_deepseek

The illustrations are made with Mistral: https://mistral.ai

3. Selecting Tools to find KPI for AI Carbon Footprint

There are ultimately few tools to measure the carbon footprint of AI, which attests once again to the fundamental contradiction between the two objectives of decarbonizing the economy and promoting economic growth with artificial intelligence, and to the lack of interest in the ecological cause.
We can therefore better understand why Trump and Musk made “drill, baby, drill” a campaign slogan.

You can check the prompts and the files to measure AI Carbon Footprint. See https://github.com/bflaven/ia_usages/tree/main/ia_kpi_carbon_footprint_deepseek

3.1 Using CodeCarbon


CodeCarbon is a Python package that tracks the carbon dioxide emissions from running code, whether on personal computers or in the cloud. It helps developers reduce their environmental impact by suggesting code optimizations and recommending regions with cleaner energy sources for cloud hosting.

A few resources for CodeCarbon

3.2 Using Eco2AI


You can check the following file on my GitHub account: 014_carbon_footprint_eco2ai.py

See https://github.com/bflaven/ia_usages/tree/main/ia_kpi_carbon_footprint_deepseek/ia_kpi_carbon_llm

3.3 Using carbontxt.org


Using the Green Web Foundation carbon.txt validator

3.4 Other tools


Here is the extended search on GitHub, with the tag “co2-monitoring” and in Python. It is clearly a shared concern.

https://github.com/topics/co2-monitoring?l=python

On Azure, a few other resources to tackle the ecological issue:

  1. Azure Sustainability Calculator and Cloud Carbon Footprint (open source) for Azure. Source: https://azure.microsoft.com/en-us/blog/microsoft-sustainability-calculator-helps-enterprises-analyze-the-carbon-emissions-of-their-it-infrastructure/
  2. Azure Emissions Impact Dashboard. The latter allows you to “Calculate your cloud-based carbon emissions today with the Emissions Impact Dashboard for Azure”. Source: https://learn.microsoft.com/en-us/power-bi/connect-data/service-connect-to-emissions-impact-dashboard

Another initiative worth reporting is CO2.js, a JavaScript library from thegreenwebfoundation.org.

Source: https://developers.thegreenwebfoundation.org/co2js/overview/

4. How to Analyse the CSR Impact of AI

Some insights from AI (Claude, Mistral)

# start the prompt
As an RSE consultant, API architect, and Python developer, can you answer the following questions with a dual objective:

You can check the full prompt on my GitHub account. See 002_ia_kpi_llm.md, 002a_ia_kpi_llm.md

1. DIAGNOSIS
Key points to consider:
– Microservices architecture with FastAPI allows granular measurement per endpoint
– AI models (Whisper, NLLB, Mistral) have different carbon footprints depending on:
  – Model size
  – Type of calculation (CPU vs GPU)
  – Inference duration
  – Volume of data processed
– Exponential growth of the volume requires a scalable architecture
– Multilingual aspect multiplies the impact by the number of supported languages

2. STRATEGY

Existing solutions (Buy):
– Cloud Carbon Footprint (open source) for Azure
– Azure Emissions Impact Dashboard (native Azure)
– CodeCarbon (Python package)
– CO2.JS (JavaScript library)

Custom solution (Build):
I propose to create a custom monitoring system that integrates directly with FastAPI.
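As a rough sketch of what such a FastAPI-integrated monitor could look like, here is a standard-library-only estimator. In the real service this logic would live in an ASGI middleware; the power draw and grid carbon intensity figures are illustrative assumptions, as is the `/transcribe` endpoint.

```python
# Stdlib-only sketch of per-endpoint carbon accounting. In a real FastAPI
# service this would live in a middleware; the power draw (W) and grid
# carbon intensity (gCO2/kWh) figures below are illustrative assumptions.
import time
from collections import defaultdict
from functools import wraps

ASSUMED_POWER_W = 250.0      # e.g. one busy GPU-backed worker (assumption)
GRID_G_CO2_PER_KWH = 60.0    # e.g. a low-carbon grid (assumption)

carbon_ledger = defaultdict(float)  # endpoint -> grams of CO2eq

def track_carbon(endpoint: str):
    """Accumulate an energy-based CO2 estimate for each call to `endpoint`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                hours = (time.perf_counter() - start) / 3600
                kwh = ASSUMED_POWER_W / 1000 * hours
                carbon_ledger[endpoint] += kwh * GRID_G_CO2_PER_KWH
        return wrapper
    return decorator

@track_carbon("/transcribe")
def transcribe(audio: bytes) -> str:
    time.sleep(0.01)  # stand-in for a Whisper inference
    return "transcript"

transcribe(b"...")
print(dict(carbon_ledger))
```

The per-endpoint ledger is precisely what makes the scoring and alerting phases below possible.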

3. ACTIONS TO BE TAKEN (BUILD)

(a) Phase 1: Monitoring Infrastructure
– Implement the basic monitoring system (code provided above)
– Add unit and integration tests
– Set up a detailed logging system
– Configure alerts on defined thresholds

(b) Phase 2: Decision Matrix
– Develop a scoring system by endpoint
– Implement automated decision rules
– Create a visualization dashboard

A decision matrix per endpoint would then be implemented on top of these scores.

(c) Phase 3: Continuous Optimization
– Set up an automatic feedback system
– Implement auto-scaling based on carbon footprint
– Develop periodic reports

(d) Phase 4: Documentation and Training
– Write technical documentation
– Create user guides
– Train teams to use the tools

5. Using DeepSeek

The emergence of DeepSeek on the model market poses with redoubled acuity the question of sovereignty over models, and the question of foreign interference in the information systems of constituted nations, particularly in Europe, such as France or Germany.

DeepSeek is an LLM built under constraint, using older chips: the export embargo on chips is more or less airtight, but, as I believe, frugality is a source of creativity! And DeepSeek has just proven it in brilliant fashion. This startup and its model have circumvented the chip problem. Circumstances command, as the military say, and their Chain-of-Thought Prompting approach, although highly energy-consuming, has produced astonishing results in terms of inference. I installed their open-source DeepSeek-R1 model via Ollama and, a sign of the times, uninstalled the 7B model from Mistral on this occasion, without regret, because DeepSeek is really impressive.

Naturally, this recomposition of the model landscape had a devastating impact on the American AI ecosystem that was beginning to take shape. Bam: DeepSeek seriously dented the market capitalization of Nvidia, AMD, Broadcom, etc. Basically, it is not only a geo-strategic weapon; it is also a weapon of mass destruction for American investment plans on AI.
Clearly, DeepSeek, by its very existence, burst the nascent speculative bubble around tech stocks. And we know that if confidence in stock market values erodes, the market correction risks being very severe.

The same will happen in all sectors, including the media, in France as elsewhere. Every entity in the economic world, starting with businesses, is expected to have a strategy regarding AI. AI completes the cycle of this commodification and financialization of the economic world.

On the diplomatic level, DeepSeek is the juggernaut of the Chinese policy of influence in economic and strategic matters in the current Armageddon. At stake is nothing less than world domination, which aims at the annihilation of adversaries. We can better understand the rise of the bellicose and warlike speeches of Trump/Musk against Xi Jinping.

Source: https://www.lefigaro.fr/secteur/high-tech/deepseek-severe-correction-boursiere-a-wall-street-nvidia-plonge-de-17-20250128

Because DeepSeek is open source, it is possible to correct the political “errors” of the model, while it is not possible, in the case of closed LLMs such as ChatGPT, Claude or Gemini, to correct what one could call their “political” biases. Be careful, I am not a conspiracy theorist; I am just trying to think in terms of sovereignty, foreign interference, influence and geostrategy, and to see more clearly in the opposition, binary for the moment, between an Atlanticist vision of the world with ChatGPT, Grok or Perplexity, and a Sino-centric DeepSeek vision, while waiting for other generalist LLMs emanating from the BRICS, i.e. the association of Brazil, Russia, India, China, South Africa, Egypt, Ethiopia, Indonesia, Iran and the United Arab Emirates.

We have known since Clausewitz that war, whether economic or otherwise, is not an isolated political act, unrelated to daily political practice; it is its logical extension, another form of politics.

“War is an act of violence whose objective is to force the adversary to carry out our will”. Clausewitz

“War is only an extension of politics by other means”. Clausewitz

Indeed, we are entitled to ask ourselves, especially when working in the media as I do: does ChatGPT not deliver an Atlanticist vision of the world, and Mistral a Euro-centric one?

Or is it only that, for the moment, with the exception of Mistral (also distributed as open source), DeepSeek alone has cleverly offered the possibility of being fine-tuned, and therefore of becoming a Trojan horse in all the Fortune 500 companies, eager for autonomy, fond of AI, but also keen to counter the hegemonic aims of the USA on AI?

This is what is called AI governance, but in fact it reveals a reality that I believe is more prosaic. Many specialists I met told me that one could fine-tune DeepSeek models. Which, of course, I immediately tried to do rather than just sticking with the stock model.

6. DeepSeek Fine-Tuning: Revolutionizing Custom AI Model Development with Python

The best post I found on the subject is this one
“DeepSeek Fine-Tuning Made Simple: Create Custom AI Models with Python”

Source: https://pub.towardsai.net/deepseek-fine-tuning-made-simple-create-custom-ai-models-with-python-7b98f091c824

We need to seriously sort things out, because the whole web is buzzing with news about DeepSeek. We see hundreds of articles like “What is DeepSeek and why is it disrupting the AI sector?”*

What the post is about: developers and AI enthusiasts, of which I am one, can now streamline their machine learning workflows with DeepSeek’s simplified fine-tuning approach.

This powerful Python-based solution removes the traditional complexities of creating custom AI models, making the process more accessible to both beginners and experienced practitioners.

By combining intuitive programming interfaces with robust model optimization capabilities, DeepSeek enables users to efficiently adapt pre-trained models to their specific needs, opening new possibilities for specialized AI applications across various domains. Whether you’re working on natural language processing, computer vision, or other AI tasks, DeepSeek’s fine-tuning framework provides the tools needed to achieve high-performance results with minimal technical overhead. That’s a great promise.
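The fine-tuning pipeline described in the cited post relies on specialized libraries (transformers, PEFT, etc.); as a minimal, standard-library sketch of the preliminary step shared by most of them, here is how (instruction, answer) pairs can be flattened into a training file. The template is a generic instruction format, not DeepSeek’s exact chat template, and the names `PROMPT_TEMPLATE`, `build_training_file` and `train.jsonl` are my own illustrative choices.

```python
# Stdlib-only sketch: turn raw (instruction, answer) pairs into a JSONL
# training file, the step that precedes any actual fine-tuning run.
# The template below is a generic instruction format, NOT DeepSeek's own.
import json

PROMPT_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

def build_training_file(pairs, path="train.jsonl"):
    """Write one {"text": ...} JSON object per training example."""
    with open(path, "w", encoding="utf-8") as f:
        for instruction, response in pairs:
            text = PROMPT_TEMPLATE.format(
                instruction=instruction, response=response
            )
            f.write(json.dumps({"text": text}) + "\n")
    return path

pairs = [
    ("Summarize the CSR impact of LLM inference.",
     "LLM inference consumes significant energy per request..."),
]
print(build_training_file(pairs))
```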

At the same time, we are definitely real sheep, me first, to naively take these articles on DeepSeek at face value. It is a promotional campaign, the fruit of the incredible mobilization of “useful idiots” or of real Chinese influencers praising the merits of this Chinese LLM: articles written either with the confounding naivety of the idealist or with a subtle, good-natured hint of pro-Chinese nationalism.

Finally, the “AI propaganda” now being done on the American side is less and less subtle. Let’s say that Machiavellianism is now rather on the Chinese side. The Americans had accustomed us to better; here, frankly, the strings are becoming crude, although I still keep all my admiration for them!

* Check this editorial prompt and the output from Claude.

# prompt
As an editorial editor, make 10 editorial propositions for a post title on DeepSeek and the topic "What is DeepSeek and why is it disrupting the AI sector?"


# output from Claude
# HOLY COW you may find some existing post titles...

Here are 10 editorial title propositions, crafted to be engaging while remaining informative:

1. "DeepSeek: The Open-Source Challenger Taking On Big Tech's AI Monopoly"

2. "Inside DeepSeek: How a 7B Model Is Outperforming GPT-4 in Code Generation"

3. "The DeepSeek Revolution: When Smaller Models Pack a Bigger Punch"

4. "From Code to Conversation: Why DeepSeek's Dual Models Are Turning Heads"

5. "DeepSeek vs GPT-4: David Takes On Goliath in the AI Arena"

6. "Beyond ChatGPT: How DeepSeek Is Rewriting the Rules of AI Accessibility"

7. "DeepSeek's Open Source Gambit: Democratizing AI One Model at a Time"

8. "Why Developers Are Flocking to DeepSeek: The New Star in AI Code Generation"

9. "DeepSeek: The AI Underdog That's Giving Tech Giants a Run for Their Money"

10. "The DeepSeek Effect: How One AI Model Is Reshaping Industry Expectations"

These titles are crafted to balance newsworthiness, intrigue, and informational value while avoiding clickbait. Each highlights a different aspect of DeepSeek's impact on the AI landscape, from its technical capabilities to its broader industry implications.

DeepSeek-R1 is an AI model that developers can customize through fine-tuning to serve specific purposes. This guide shows you how to adapt the standard DeepSeek R1 model into a specialized tool for your particular field. Fine-tuning allows the model to give more precise and targeted responses that match your requirements. The model excels at analytical and problem-solving tasks thanks to its advanced reasoning capabilities.

Source: https://pub.towardsai.net/deepseek-fine-tuning-made-simple-create-custom-ai-models-with-python-7b98f091c824


Ollama commands reminder

To play with DeepSeek, here is a reminder of the principal commands for using the model with Ollama.

# NOPE: this model is too heavy for my computer
ollama run deepseek-r1:7b
# YEP: this model is light enough for my computer
ollama run deepseek-r1


# To run and chat with Llama 2
ollama run llama2
ollama run codellama:7b
ollama run deepseek-r1:latest



# To run and chat with orca-mini
ollama pull orca-mini
ollama pull codellama:7b
ollama pull deepseek-r1:latest


ollama run orca-mini
ollama run codellama:7b
ollama run deepseek-r1:latest

# remove a model
ollama rm llama2
ollama rm orca-mini
ollama rm codellama:7b
ollama rm codellama:7b-python
ollama rm mistral:latest
ollama rm deepseek-r1:latest


# list the model
ollama list


# when you are in the model you can use
>>> /?
>>> /show
>>> /set verbose

# to get out of the model
>>> /bye

For more information, check the official website: https://ollama.com/
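Besides the CLI above, a running Ollama server also exposes a REST API on port 11434. Here is a minimal standard-library sketch of a client for it; the commented-out call assumes the server is running locally with the model already pulled.

```python
# Minimal stdlib client for the local Ollama REST API (/api/generate).
# The actual network call is commented out: it requires a running server.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """JSON body for a non-streaming generation request."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()

def ask(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask("deepseek-r1:latest", "Why is the sky blue?")  # needs a running server
print(build_payload("deepseek-r1:latest", "hello"))
```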

NOTE: I hesitated to publish these last two chapters, which are a bit crazy, but I decided to do it; after all, I came to AI through language and NLP, and it doesn’t harm anyone, except maybe me. It was the name DeepSeek that rang a bell: I looked for the cultural melting pot that presided over the choice of this name. What storytelling could we draw from this brand? I then reviewed two other brands in the sector: Mistral and Claude. These remarks are above all delirious and humorous.

7. What Does the Name Mean? A Fun Semantic Study of AI Leader Brands (DeepSeek, Mistral, Claude)

I am a voracious reader. For those who know, I am a fan of the “Boloss des Belles Lettres”. I believe, like René Char, that “The words that will emerge know things about us that we do not know about them.”

This quote from René Char that I love is applicable to brands, it goes without saying. Let’s play a little and start with a semantic game.

“The brands that will emerge know things about us that we do not know about them.”

Names say a lot about the psyche of their creators. There is not only branding and marketing; a name is also the product of a culture and a weapon in the battle, happening before our eyes, for cultural hegemony*.

*For more information on Cultural hegemony, check wikipedia: https://en.wikipedia.org/wiki/Cultural_hegemony

The word “DeepSeek”

This is a very serious name; without being too mistaken, I think of the affiliation to Deep Blue.
Source: https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)

To my mind, the name DeepSeek also refers to the concepts of Deep Sea or Deep State, and its famous offspring Deep Throat from the time of Watergate. A world populated with Melanocetus johnsonii*. This word unconsciously stimulates our attraction/repulsion, and therefore our fascination with the secret and the obscure, that is to say the unconscious and the mystery.

DeepSeek is, by its very name, a subject of both fear and curiosity. This word is also that of paranoia, intelligence, or conspiracy. The aura of this word in English fits perfectly with a project of surveillance society or surveillance capitalism, which is no doubt the Chinese political project.

*For more information on Melanocetus johnsonii, check the website: https://elpais.com/clima-y-medio-ambiente/2025-02-12/muere-el-pez-diablo-avistado-en-aguas-de-tenerife-de-aspecto-feroz-cabia-en-la-palma-de-la-mano.html

The word “Mistral”

What to make of Mistral, the product of “French” AI? The mistral is a southern wind. The name evokes local anchoring in a globalized world. It is an invitation to travel: a “road to adventure”, if we continue the maritime metaphor, making you a modern Ulysses taking to the sea of AI. A starting point for an odyssey yet to be written, an embarkation towards adventure and peril from which we will emerge victorious. A saying attributed to Aristotle goes: “There are three kinds of men: the living, the dead, and those who go to sea.” This is one of the unconscious promises sold by the creators of Mistral.

Mistral also brings us back to nature, as the Cloud did not so long ago. Reintroducing nature into modernity is an attractive idea, although, as we have seen previously, AI will probably accelerate the destruction of nature. It is a bold syncretism between modernity in its purest form, AI, and the vague idea we have of nature as a divinity, the Gaia/Pachamama combo. Just as when we were sold the Cloud: those little clouds up there in the sky, doing their little calculations. Is this the return of the sacred knocking at the door? A small push of verticality, of transcendence, for the Old One who, according to Einstein, does not play dice? This is the return of Determinism.

Einstein’s exact quote is this one:
“The theory produces a good deal but hardly brings us closer to the secret of the Old One. I am at all events convinced that He does not play dice.”

Let’s say that the Mistral that concerns us probably takes on this dimension of harmless, enveloping nature by becoming an ethereal, aerial and above all elusive entity, in which the causal link between AI and pollution is almost naturally dissolved.

At the same time, if we dig into this “voice” of the South, Mistral is also the wind that sweeps Marseille, and there we dive into another universe.
It’s funny… A kind of pagnolade where Pique-Bouffigue is an AI developer, and Marius, Fanny and César are building LLMs. We are far from “cold” technique. We dive into the warm humanity of the mythical and folkloric South, somewhere between bouillabaisse, pastis and red mullet en papillote. A way of life, with the little cat “le Chat” sleeping at the door of the “mas”, a new farmhouse where the computing servers are humming.

“Vé, Mistral, c’est pas la “teknologie” IA qui sent bon le thym, la sarriette et la farigoulette”.*
* “Hey, Mistral, it’s not the AI ‘teknology’ that smells sweetly of thyme, savory and farigoulette.”

The word “Claude”
To a little Frenchie like me, the name conjures literate Americans fascinated by luxury à la française. If we keep personifying brands, Claude and his sonnets sound very harmless and very poetic. A sort of minstrel from the Middle Ages who travels from castle to castle, or rather from company to company, handing out AI-generated sweet words and scripts to the young ladies and gentlemen of greenwashed startups.

It is almost the same backdrop that you find with Laravel, the PHP framework.

What about the rest?
What do you think about “Grok”, “Perplexity”… Not much, really. In any case, thinking about these words is just a reflection of my own culture, so it’s just for fun. Still, it can be taken as a serious subject. In Germany, for instance, it is not a game at all. The “Unwort des Jahres” commission aims to distinguish the “Unwort” from the “Wort”, the non-word from the word. Which can, of course, be applied to brands.

Let’s say that the “Wort”, or word, is what the “I” means, and the “Unwort” what the “NON-I” means: its negative, its repressed, so to speak 🙂 In Germany, it’s more of a serious game than anything else.

8. Unwort des Jahres vs Wort des Jahres

What a strange tradition, but there are plausible historical explanations. After all, Germany is both a country of great culture, especially philosophical culture, and the cradle of a uniquely European fascism that deeply soiled the German language. Germans therefore have every reason to be wary of words.

This is how the “Unwort des Jahres” was born, becoming a tradition in Germany in the same way as the “Wort des Jahres”. Quite a few words have been distinguished, often representative of the situation and, even more, of the psyche of the German masses, of “das Es”, to speak Freudian. We have had “Lügenpresse”, “Ausländerfrei”, “Entlassungsproduktivität” and the very recent “Biodeutsch”.

The German language is well dissected in the program Karambolage on ARTE, which offers an amused comparative anthropology of the natives on both sides of the Rhine, French and German alike.

For more information, check ARTE: https://www.arte.tv/fr/videos/RC-014034/karambolage/ or the website: https://de.wikipedia.org/wiki/Unwort_des_Jahres_(Deutschland)

Videos to tackle this post

Measuring AI’s Carbon Footprint and Impact with CodeCarbon & Eco2AI

How to Measure the Carbon Footprint of Your Local Mistral-Poet LLM Running on Ollama with CodeCarbon

Sustainable AI: Tracking Carbon Footprints of Mistral Models with Ollama & CodeCarbon

More info