Evaluating AI-Generated Text: Semantic Similarity with Sentence Embeddings

In this video, we demonstrate how to use Streamlit to query the Mistral LLM locally with the help of Ollama. We'll show you how to input a prompt and write the result to a file. As a journalist, you can paste an article's content, title, and keywords to create a gold standard. This gold standard is then sent via a prompt to a Mistral-type LLM, which generates 10 alternative titles and a set of 10 keywords.
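As a rough sketch of this workflow, the snippet below assembles the gold-standard material into a prompt and sends it to a local Mistral model through the `ollama` Python client. The function names, prompt wording, and output filename are illustrative assumptions, not the exact code from the scripts listed below.

```python
def build_prompt(content, title, keywords):
    """Assemble the gold-standard content, title, and keywords into one prompt."""
    return (
        "You are an editorial assistant.\n"
        f"Article content:\n{content}\n\n"
        f"Original title: {title}\n"
        f"Original keywords: {', '.join(keywords)}\n\n"
        "Generate 10 alternative titles and a set of 10 keywords."
    )

def query_mistral(prompt):
    """Send the prompt to the local Mistral model via a running Ollama server."""
    import ollama  # lazy import: requires `pip install ollama` and `ollama pull mistral`
    response = ollama.chat(
        model="mistral",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

# In the Streamlit app, the pieces would be wired up roughly like this:
#   import streamlit as st
#   content = st.text_area("Article content")
#   title = st.text_input("Original title")
#   keywords = st.text_input("Keywords (comma-separated)")
#   if st.button("Generate"):
#       result = query_mistral(build_prompt(content, title, keywords.split(",")))
#       with open("llm_output.txt", "w", encoding="utf-8") as f:
#           f.write(result)
```

Keeping the prompt assembly in its own function makes it easy to test and tweak independently of the model call.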

We’ll cover the following steps:
1. Generate the output from the LLM using the script `024_ia_ollama_streamlit.py`.
2. Append additional variables to the output from the LLM using the script `027a_ia_ollama_streamlit_append_files.py`.
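The evaluation idea named in the title, scoring the LLM-generated titles against the gold-standard title by cosine similarity of their embeddings, can be sketched as follows. A real setup would use a sentence-embedding model (e.g. sentence-transformers); here a toy bag-of-words "embedding" stands in so the example is self-contained, and all function names are assumptions.

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': lowercase bag-of-words counts.
    A stand-in for a real sentence-embedding model."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def rank_titles(gold, candidates):
    """Score each candidate title against the gold-standard title,
    highest similarity first."""
    g = embed(gold)
    scored = [(c, cosine(g, embed(c))) for c in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

Swapping `embed` for a call to a sentence-embedding model is the only change needed to turn this sketch into a semantic (rather than lexical) comparison.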

You can read the article on my blog: "Content Quality: How Sentence Embeddings Can Save AI-Generated Content and some other concerns on AI: Environmental Impact, Job Loss"
https://wp.me/p3Vuhl-3mv

You can listen to the audio "podcast" version of this post, made with NotebookLM: https://on.soundcloud.com/61ufT6BNt4WzhvwX7

The code is available on my GitHub account: https://github.com/bflaven/ia_usages/tree/main/ia_spacy_llm

Related Content

Content Quality: How Sentence Embeddings Can Save AI-Generated Content and some other concerns on AI: Environmental Impact, Job Loss (15th Nov 2024)
