


Meeting GenAI App
Use the Power of Generative AI to Summarize Meetings and Next Steps
Posted by Maya Sandler on August 28, 2023
Intent
I was interested in learning what Generative AI Large Language Model (LLM) text prompting can be used for, so I decided to play with Google's Vertex AI, which offers multiple LLMs, each suited to specific tasks (from summarization to ideation, coding help, and more). The prompt you write drives the nature of the task.
I personally often need to extract information from meeting transcripts and long emails and create a summarized output. The format I like best includes: (1) a five-to-six-sentence summary, (2) the major concerns raised in the meeting or email, and (3) the action items or next steps. As these are all summarization tasks, I used the 'text-bison' model from the Vertex AI language models library. No fine-tuning was needed.
As these are three different prompts over the same text, it is much easier to write a short Python script with the SDK that loads the text file, runs multiple predictions, and saves the results in a single output file. I therefore built the script so that one can load a file, run the predictions, and download the result as a text file. Each prediction prompt consists of the requested outcome plus the text itself.
Here are the prompts I used:
- "Summarize the meeting in 5 sentences according to the following text."
- "Give 5 top concerns that come out from this conversation."
- "Give 5 recommendations on what to focus on from this conversation."
Another feature I really wanted to add to this app is a simple GUI for selecting the input text file and downloading the prediction output. It is fine to hard-code the full path or pass a file name through the code, but I wanted something far more flexible that needs minimal code changes. This was not a fun task (but it works!), and I'll say more about it in the appropriate section.
The final code can be found in my GitHub repository. I close the post with conclusions and insights from the project.
Cleaning the Data
As in any data job, we need to clean the data. However, in this case it is a bit different.
First, these are public servers: unless you run inside a VPC, whatever you send will feed the model, stay in the cloud indefinitely, and be searchable. A few cleaning passes are therefore needed: remove PII (Personally Identifiable Information), remove internal company or personal details, and replace the names of people and companies with generic placeholders of equivalent meaning, such as 'company-1', 'team', 'person-1', etc.
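To make this concrete, here is a minimal redaction sketch. The redact helper and the name-to-placeholder mapping are hypothetical examples, not part of the original script:

```python
import re

# Hypothetical mapping of sensitive names to neutral placeholders.
REPLACEMENTS = {
    "Acme Corp": "company-1",
    "Jane Doe": "person-1",
    "John Smith": "person-2",
}

def redact(text: str) -> str:
    """Replace known names with placeholders before the text leaves your machine."""
    for name, placeholder in REPLACEMENTS.items():
        # re.escape guards names with regex metacharacters;
        # IGNORECASE catches different capitalizations.
        text = re.sub(re.escape(name), placeholder, text, flags=re.IGNORECASE)
    return text
```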
Second, since the text will sometimes be plain prose without a distinct structure, it is best to restructure it with bullet points or, even better, to state who is saying what (client/team, company-1/company-2/consultant, etc.).
Third, the text should be saved as a .txt file so it can then be loaded and processed by the prediction model.
Processing the Data
I used a Python script to read the data from a raw .txt file, prompt the model, and write the three prompt results into a single output text file.
I used google-cloud-aiplatform, vertexai, and TextGenerationModel from the vertexai.language_models library to enable Vertex AI API text prediction calls. In addition to the prediction modules, I used jupyterlab-widgets, jupyterlab, and jupyter labextension for working in Jupyter Notebook; IPython.display to display images in Jupyter Notebook; ipywidgets to enable the load-text UI; and io to read files from a given file path.
To execute the generative AI predictions, I chose the text-bison@001 LLM, as it deals well with different types of text summarization and extraction.
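As a minimal sketch of the setup (the project ID and region below are placeholders you would replace with your own):

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholders: point these at your own GCP project and region.
vertexai.init(project="my-gcp-project", location="us-central1")

# The text model used throughout this post.
model = TextGenerationModel.from_pretrained("text-bison@001")
```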
Loading the Text to the Model
Python has built-in functions for reading and writing text files. I could have written the code to take the full path of the text file, or placed the file in the cloud and passed a URL. However, when working with Vertex AI you need to upload your Jupyter Notebook to their service, and I did not want to also upload every text file to be analyzed to the cloud. I wanted something simple, like a UI, that would let me choose the files to load from anywhere, even from my local computer.
I have to say this approach was not easy to implement: the ipywidgets library is not ideally supported, some things did not work as expected, and it does not run from VS Code; it only runs from Jupyter Notebook online.
The display(upload) function creates the UI to select a text file from your computer.
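A minimal sketch of creating that widget (restricting the picker to a single .txt file is my own choice):

```python
import ipywidgets as widgets
from IPython.display import display

# File-picker widget; accept only .txt files, one at a time.
upload = widgets.FileUpload(accept=".txt", multiple=False)
display(upload)
```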

After clicking the "Upload" button and selecting the file via the UI, the next snippet transforms it into a string in memory. Under the hood, the library reads the file as raw bytes and then decodes them into a string, which is the value variable.
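A sketch of that conversion, assuming the ipywidgets 7.x FileUpload API (version 8 stores uploads as a tuple of dicts instead):

```python
# upload.value maps each uploaded file name to a dict holding its raw bytes.
uploaded_file = next(iter(upload.value.values()))
value = uploaded_file["content"].decode("utf-8")  # bytes -> str
```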
You can print(value) to inspect the string and make sure the file loaded correctly.
Now that the file is in memory, we can create the prompts and run the predictions.
Building the Prompt Structure
The prompt is built from {the prediction needed} + "Text: " + {the string of the text file} + {the appropriate action}.
As there is a single text string from the file but multiple required actions, I decided to create a dictionary that holds the actions and their corresponding prompts. This way I can reuse the base code to elicit different prompts, or change the order of prompts in the final text result. For this purpose I created prompt_dict and appended the needed parts at each step. Later, I executed the Gen-AI prediction on prompt_dict only for the actions selected in the prompts_to_use list.
1. Creating a prompt_dict Dictionary with Multiple Actions:
I selected four actions to apply to the text: 'Summary', 'Concerns', 'Positives', and 'Recommendations', each with its own prompt text. For example, the prompt for the 'Concerns' action is 'Give 5 top concerns that come out from this conversation.'
2. Selecting Predictions to Execute:
The app offers four prompts: 'Positives', 'Summary', 'Concerns', and 'Recommendations'. However, we do not always want all of them executed, so we specify which prompt types to run in the prompts_to_use list. Change it according to need.
3. Adding the Full Prompt for Selected Actions:
For each action selected in the prompts_to_use list, a full prompt is added to the dictionary, built as stated before (see the sketch after this list). As an example, the full prompt for the 'Concerns' action will look like this:
'Give 5 top concerns that come out from this conversation.
Text: {string from file}
Concerns:'
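Here is a sketch of the three steps above. The dictionary layout, the 'full_prompt' key, and the exact 'Positives' wording are my reconstruction, not necessarily the code in the repository:

```python
# Step 1: each action maps to its prompt text.
prompt_dict = {
    "Summary": {"prompt": "Summarize the meeting in 5 sentences according to the following text."},
    "Concerns": {"prompt": "Give 5 top concerns that come out from this conversation."},
    "Positives": {"prompt": "Give 5 positives that come out from this conversation."},
    "Recommendations": {"prompt": "Give 5 recommendations on what to focus on from this conversation."},
}

# Step 2: select which predictions to execute; change according to need.
prompts_to_use = ["Summary", "Concerns", "Recommendations"]

# Step 3: build the full prompt for each selected action:
# {prompt text} + "Text: " + {file string} + {action name}:
for action in prompts_to_use:
    prompt_dict[action]["full_prompt"] = (
        f"{prompt_dict[action]['prompt']}\nText: {value}\n{action}:"
    )
```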
Executing the Selected Predictions
1. Executing the Prompts in the Generative AI Model:
The prediction prompts are executed on the dictionary items according to the selected prompts_to_use list, and the results are appended back into the dictionary.
I decided to run the model with these parameters to decrease the creativity of the output: temperature=0.2, max_output_tokens=1024, top_k=40, and top_p=0.8.
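A sketch of the execution loop, reusing the model object from the setup sketch earlier (the 'result' key is my own naming):

```python
# Run each selected prediction and store the text result back in the dictionary.
for action in prompts_to_use:
    response = model.predict(
        prompt_dict[action]["full_prompt"],
        temperature=0.2,         # low temperature -> focused, less creative output
        max_output_tokens=1024,  # cap on response length
        top_k=40,                # sample only from the 40 most likely tokens
        top_p=0.8,               # nucleus sampling cutoff
    )
    prompt_dict[action]["result"] = response.text
```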
2. Gluing the Results Together:
Now that the result of each prompt sits in the prompt_dict dictionary, creating a single text file only requires joining the result strings into one. The fact that I used a dictionary to run and store the prediction results helps me here as well:
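A sketch of the concatenation, iterating in the order given by prompts_to_use:

```python
# Stitch the selected results together, one labeled section per action.
meeting_recap = ""
for action in prompts_to_use:
    meeting_recap += f"{action}:\n{prompt_dict[action]['result']}\n\n"
```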
3. Saving the Output to a Text File:
As the execution output lives in the browser and we want to save the meeting recap locally (in this case), the easiest way is to write the recap string into a text file and create a URL for that file.
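A sketch of that step using IPython's FileLink to render a clickable link in the notebook (the output file name is my choice):

```python
from IPython.display import FileLink, display

# Write the recap to disk next to the notebook.
with open("meeting_recap.txt", "w", encoding="utf-8") as f:
    f.write(meeting_recap)

# Render a clickable download link in the notebook output.
display(FileLink("meeting_recap.txt"))
```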
We will then see a link to the newly created URL in the notebook output; click it, then press CTRL+S to save the recap as a local text file.

Conclusions
In this project I created a small meeting-recap script that supports multiple, configurable prompts according to need. The script uses the Google Generative AI API and calls the prediction multiple times with different prompts on the same meeting text.
I have some insights that I think are important to note:
1. The Google Vertex AI SDK is much easier to use than the GUI, especially since I needed to call the API multiple times for this task.
2. Cleaning the data of PII, company information, etc. takes most of the time and effort, and it is crucial: the data must be clean before executing the predictions.
3. Confirm that no harmful language is used in the text. Google has guidelines for any word or intent it may suspect is harmful. I personally had an issue with 'company xxx', as 'xxx' was flagged as harmful. If you receive a blank output from the execution, this might be the cause, and you will have to debug.
* A good way to see what went wrong is to inspect the output's attributes with .__dict__ instead of printing the output.
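For example (a sketch; the exact attributes vary by SDK version):

```python
response = model.predict(prompt_dict["Summary"]["full_prompt"])
# Dump all response attributes (blocked status, safety metadata, ...)
# instead of just response.text, which may be empty when content is flagged.
print(response.__dict__)
```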
4. Different prompts give different outputs. Experiment with different wordings of the task and be specific when needed. This code runs 'zero-shot' predictions (i.e., no examples are needed to elicit a result). However, if you want a specific output format, you may need 'one-shot' or 'few-shot' prompting and will have to add a few examples to get the desired result.
5. Playing with the creativity parameters (temperature, top_k, top_p) will change the output.
6. Always validate the results. Generative AI can hallucinate, so it is highly recommended to reality-check the model's output.